2025-12-13 06:52:39,059 p=31218 u=zuul n=ansible | Starting galaxy collection install process
2025-12-13 06:52:39,060 p=31218 u=zuul n=ansible | Process install dependency map
2025-12-13 06:52:53,577 p=31218 u=zuul n=ansible | Starting collection install process
2025-12-13 06:52:53,578 p=31218 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+b9f05e2b' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general'
2025-12-13 06:52:54,024 p=31218 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+b9f05e2b at /home/zuul/.ansible/collections/ansible_collections/cifmw/general
2025-12-13 06:52:54,024 p=31218 u=zuul n=ansible | cifmw.general:1.0.0+b9f05e2b was installed successfully
2025-12-13 06:52:54,024 p=31218 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman'
2025-12-13 06:52:54,076 p=31218 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman
2025-12-13 06:52:54,076 p=31218 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully
2025-12-13 06:52:54,076 p=31218 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general'
2025-12-13 06:52:54,738 p=31218 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general
2025-12-13 06:52:54,738 p=31218 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2025-12-13 06:52:54,738 p=31218 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2025-12-13 06:52:54,784 p=31218 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2025-12-13 06:52:54,784 p=31218 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
2025-12-13 06:52:54,784 p=31218 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2025-12-13 06:52:54,872 p=31218 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2025-12-13 06:52:54,872 p=31218 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2025-12-13 06:52:54,873 p=31218 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2025-12-13 06:52:54,895 p=31218 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2025-12-13 06:52:54,895 p=31218 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2025-12-13 06:52:54,895 p=31218 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2025-12-13 06:52:55,025 p=31218 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2025-12-13 06:52:55,025 p=31218 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2025-12-13 06:52:55,025 p=31218 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2025-12-13 06:52:55,132 p=31218 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2025-12-13 06:52:55,132 p=31218 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2025-12-13 06:52:55,132 p=31218 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2025-12-13 06:52:55,193 p=31218 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2025-12-13 06:52:55,193 p=31218 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2025-12-13 06:52:55,193 p=31218 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2025-12-13 06:52:55,209 p=31218 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2025-12-13 06:52:55,209 p=31218 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2025-12-13 06:52:55,209 p=31218 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2025-12-13 06:52:55,417 p=31218 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2025-12-13 06:52:55,418 p=31218 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2025-12-13 06:52:55,418 p=31218 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2025-12-13 06:52:55,661 p=31218 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2025-12-13 06:52:55,661 p=31218 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
2025-12-13 06:52:55,661 p=31218 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2025-12-13 06:52:55,690 p=31218 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2025-12-13 06:52:55,690 p=31218 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2025-12-13 06:52:55,690 p=31218 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd'
2025-12-13 06:52:55,716 p=31218 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd
2025-12-13 06:52:55,716 p=31218 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2025-12-13 06:52:55,716 p=31218 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2025-12-13 06:52:55,795 p=31218 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2025-12-13 06:52:55,795 p=31218 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
2025-12-13 06:53:03,866 p=31853 u=zuul n=ansible | PLAY [Remove status flag] ******************************************************
2025-12-13 06:53:03,883 p=31853 u=zuul n=ansible | TASK [Gathering Facts ] ********************************************************
2025-12-13 06:53:03,883 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:03 +0000 (0:00:00.033) 0:00:00.033 *****
2025-12-13 06:53:03,883 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:03 +0000 (0:00:00.032) 0:00:00.032 *****
2025-12-13 06:53:04,762 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:04,776 p=31853 u=zuul n=ansible | TASK [Delete success flag if exists path={{ ansible_user_dir }}/cifmw-success, state=absent] ***
2025-12-13 06:53:04,776 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:04 +0000 (0:00:00.892) 0:00:00.926 *****
2025-12-13 06:53:04,776 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:04 +0000 (0:00:00.892) 0:00:00.924 *****
2025-12-13 06:53:05,004 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:05,010 p=31853 u=zuul n=ansible | TASK [Inherit from parent scenarios if needed _raw_params=ci/playbooks/tasks/inherit_parent_scenario.yml] ***
2025-12-13 06:53:05,010 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.233) 0:00:01.160 *****
2025-12-13 06:53:05,010 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.233) 0:00:01.158 *****
2025-12-13 06:53:05,025 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/ci/playbooks/tasks/inherit_parent_scenario.yml for localhost
2025-12-13 06:53:05,066 p=31853 u=zuul n=ansible | TASK [Inherit from parent parameter file if instructed file={{ item }}] ********
2025-12-13 06:53:05,066 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.056) 0:00:01.216 *****
2025-12-13 06:53:05,066 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.056) 0:00:01.214 *****
2025-12-13 06:53:05,085 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:05,090 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] ***
2025-12-13 06:53:05,091 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.024) 0:00:01.240 *****
2025-12-13 06:53:05,091 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.024) 0:00:01.239 *****
2025-12-13 06:53:05,112 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:05,118 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Get customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] ***
2025-12-13 06:53:05,118 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.027) 0:00:01.268 *****
2025-12-13 06:53:05,118 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.027) 0:00:01.267 *****
2025-12-13 06:53:05,171 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:05,177 p=31853 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] ***
2025-12-13 06:53:05,177 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.058) 0:00:01.327 *****
2025-12-13 06:53:05,177 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.058) 0:00:01.325 *****
2025-12-13 06:53:05,357 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:05,364 p=31853 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] ***
2025-12-13 06:53:05,364 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.186) 0:00:01.513 *****
2025-12-13 06:53:05,364 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.186) 0:00:01.512 *****
2025-12-13 06:53:05,380 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:05,386 p=31853 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] ***
2025-12-13 06:53:05,386 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.022) 0:00:01.536 *****
2025-12-13 06:53:05,387 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.022) 0:00:01.535 *****
2025-12-13 06:53:05,403 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:05,409 p=31853 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] ***
2025-12-13 06:53:05,409 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.022) 0:00:01.559 *****
2025-12-13 06:53:05,409 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.022) 0:00:01.558 *****
2025-12-13 06:53:05,426 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:05,432 p=31853 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] ***************
2025-12-13 06:53:05,432 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.022) 0:00:01.582 *****
2025-12-13 06:53:05,432 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:05 +0000 (0:00:00.022) 0:00:01.580 *****
2025-12-13 06:53:06,648 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:06,658 p=31853 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] ***
2025-12-13 06:53:06,658 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:06 +0000 (0:00:01.225) 0:00:02.808 *****
2025-12-13 06:53:06,658 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:06 +0000 (0:00:01.225) 0:00:02.806 *****
2025-12-13 06:53:06,823 p=31853 u=zuul n=ansible | changed: [localhost] => (item=tmp)
2025-12-13 06:53:06,971 p=31853 u=zuul n=ansible | changed: [localhost] => (item=artifacts/repositories)
2025-12-13 06:53:07,132 p=31853 u=zuul n=ansible | changed: [localhost] => (item=venv/repo_setup)
2025-12-13 06:53:07,140 p=31853 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] ***
2025-12-13 06:53:07,140 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:07 +0000 (0:00:00.482) 0:00:03.290 *****
2025-12-13 06:53:07,140 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:07 +0000 (0:00:00.482) 0:00:03.288 *****
2025-12-13 06:53:07,983 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:07,989 p=31853 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] ***
2025-12-13 06:53:07,989 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:07 +0000 (0:00:00.848) 0:00:04.139 *****
2025-12-13 06:53:07,989 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:07 +0000 (0:00:00.848) 0:00:04.137 *****
2025-12-13 06:53:08,983 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:08,990 p=31853 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] ***
2025-12-13 06:53:08,990 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:08 +0000 (0:00:01.000) 0:00:05.139 *****
2025-12-13 06:53:08,990 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:08 +0000 (0:00:01.000) 0:00:05.138 *****
2025-12-13 06:53:17,227 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:17,233 p=31853 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] ***
2025-12-13 06:53:17,234 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:17 +0000 (0:00:08.243) 0:00:13.383 *****
2025-12-13 06:53:17,234 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:17 +0000 (0:00:08.243) 0:00:13.382 *****
2025-12-13 06:53:17,929 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:17,936 p=31853 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] ***
2025-12-13 06:53:17,936 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:17 +0000 (0:00:00.702) 0:00:14.085 *****
2025-12-13 06:53:17,936 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:17 +0000 (0:00:00.702) 0:00:14.084 *****
2025-12-13 06:53:17,952 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:17,958 p=31853 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] ***
2025-12-13 06:53:17,958 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:17 +0000 (0:00:00.022) 0:00:14.108 *****
2025-12-13 06:53:17,958 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:17 +0000 (0:00:00.022) 0:00:14.106 *****
2025-12-13 06:53:18,749 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:18,755 p=31853 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, mode=0644] ***
2025-12-13 06:53:18,756 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.797) 0:00:14.905 *****
2025-12-13 06:53:18,756 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.797) 0:00:14.904 *****
2025-12-13 06:53:18,781 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:18,787 p=31853 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] ***
2025-12-13 06:53:18,787 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.031) 0:00:14.936 *****
2025-12-13 06:53:18,787 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.031) 0:00:14.935 *****
2025-12-13 06:53:18,811 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:18,817 p=31853 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] ***
2025-12-13 06:53:18,817 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.030) 0:00:14.967 *****
2025-12-13 06:53:18,818 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.030) 0:00:14.966 *****
2025-12-13 06:53:18,843 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:18,849 p=31853 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] ***
2025-12-13 06:53:18,849 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.031) 0:00:14.999 *****
2025-12-13 06:53:18,849 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:18 +0000 (0:00:00.031) 0:00:14.997 *****
2025-12-13 06:53:19,364 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:19,369 p=31853 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 06:53:19,369 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.520) 0:00:15.519 *****
2025-12-13 06:53:19,370 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.520) 0:00:15.518 *****
2025-12-13 06:53:19,917 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:19,923 p=31853 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2025-12-13 06:53:19,923 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.553) 0:00:16.073 *****
2025-12-13 06:53:19,923 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.553) 0:00:16.071 *****
2025-12-13 06:53:19,935 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:19,941 p=31853 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] ***
2025-12-13 06:53:19,941 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.018) 0:00:16.091 *****
2025-12-13 06:53:19,941 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.018) 0:00:16.089 *****
2025-12-13 06:53:19,954 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:19,959 p=31853 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] ***
2025-12-13 06:53:19,959 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.018) 0:00:16.109 *****
2025-12-13 06:53:19,959 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.018) 0:00:16.108 *****
2025-12-13 06:53:19,971 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:19,978 p=31853 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] ***
2025-12-13 06:53:19,978 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.018) 0:00:16.127 *****
2025-12-13 06:53:19,978 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:19 +0000 (0:00:00.018) 0:00:16.126 *****
2025-12-13 06:53:19,999 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:20,004 p=31853 u=zuul n=ansible | TASK [repo_setup : Create download directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] ***
2025-12-13 06:53:20,004 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.026) 0:00:16.154 *****
2025-12-13 06:53:20,005 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.026) 0:00:16.153 *****
2025-12-13 06:53:20,015 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:20,021 p=31853 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] ***
2025-12-13 06:53:20,021 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.170 *****
2025-12-13 06:53:20,021 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.169 *****
2025-12-13 06:53:20,031 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:20,037 p=31853 u=zuul n=ansible | TASK [Download the RPM name=krb_request] ***************************************
2025-12-13 06:53:20,037 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.187 *****
2025-12-13 06:53:20,037 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.185 *****
2025-12-13 06:53:20,048 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:20,054 p=31853 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] ***
2025-12-13 06:53:20,055 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.017) 0:00:16.204 *****
2025-12-13 06:53:20,055 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.017) 0:00:16.203 *****
2025-12-13 06:53:20,065 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:20,071 p=31853 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] ***
2025-12-13 06:53:20,071 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.221 *****
2025-12-13 06:53:20,071 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.219 *****
2025-12-13 06:53:20,081 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:20,087 p=31853 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] ***
2025-12-13 06:53:20,087 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.015) 0:00:16.236 *****
2025-12-13 06:53:20,087 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.015) 0:00:16.235 *****
2025-12-13 06:53:20,097 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:20,103 p=31853 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] ***
2025-12-13 06:53:20,104 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.253 *****
2025-12-13 06:53:20,104 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.016) 0:00:16.252 *****
2025-12-13 06:53:20,113 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:20,119 p=31853 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] ***
2025-12-13 06:53:20,119 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.015) 0:00:16.269 *****
2025-12-13 06:53:20,119 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.015) 0:00:16.268 *****
2025-12-13 06:53:20,269 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:20,275 p=31853 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] ***
2025-12-13 06:53:20,275 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.156) 0:00:16.425 *****
2025-12-13 06:53:20,275 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.156) 0:00:16.424 *****
2025-12-13 06:53:20,450 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:20,456 p=31853 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] ***
2025-12-13 06:53:20,456 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.181) 0:00:16.606 *****
2025-12-13 06:53:20,456 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.181) 0:00:16.605 *****
2025-12-13 06:53:20,636 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:20,642 p=31853 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] ***
2025-12-13 06:53:20,642 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.185) 0:00:16.792 *****
2025-12-13 06:53:20,642 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:20 +0000 (0:00:00.185) 0:00:16.791 *****
2025-12-13 06:53:51,107 p=31853 u=zuul n=ansible | fatal: [localhost]: FAILED! =>
  changed: false
  elapsed: 30
  msg: 'Status code was -1 and not [200]: Request failed: '
  redirected: false
  status: -1
  url: http://38.129.56.153:8766/gating.repo
2025-12-13 06:53:51,107 p=31853 u=zuul n=ansible | ...ignoring
2025-12-13 06:53:51,113 p=31853 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] ***
2025-12-13 06:53:51,113 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:30.471) 0:00:47.263 *****
2025-12-13 06:53:51,114 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:30.471) 0:00:47.262 *****
2025-12-13 06:53:51,136 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:51,142 p=31853 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] ***
2025-12-13 06:53:51,142 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.292 *****
2025-12-13 06:53:51,142 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.290 *****
2025-12-13 06:53:51,165 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:51,171 p=31853 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] ***
2025-12-13 06:53:51,171 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.321 *****
2025-12-13 06:53:51,171 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.319 *****
2025-12-13 06:53:51,193 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:51,199 p=31853 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{ cifmw_repo_setup_output }}/{{ _comp_repo }}] ***
2025-12-13 06:53:51,199 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.349 *****
2025-12-13 06:53:51,200 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.348 *****
2025-12-13 06:53:51,222 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:51,228 p=31853 u=zuul n=ansible | TASK [repo_setup : Lower the priority of componennt repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] ***
2025-12-13 06:53:51,228 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.378 *****
2025-12-13 06:53:51,228 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.028) 0:00:47.376 *****
2025-12-13 06:53:51,250 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:53:51,256 p=31853 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] ***
2025-12-13 06:53:51,256 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.027) 0:00:47.406 *****
2025-12-13 06:53:51,256 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.027) 0:00:47.404 *****
2025-12-13 06:53:51,521 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:53:51,526 p=31853 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] ***
2025-12-13 06:53:51,527 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.270) 0:00:47.676 *****
2025-12-13 06:53:51,527 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.270) 0:00:47.675 *****
2025-12-13 06:53:51,732 p=31853 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos-addons.repo)
2025-12-13 06:53:51,905 p=31853 u=zuul n=ansible | changed: [localhost] => (item=/etc/yum.repos.d/centos.repo)
2025-12-13 06:53:51,911 p=31853 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] ***
2025-12-13 06:53:51,912 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.385) 0:00:48.061 *****
2025-12-13 06:53:51,912 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:51 +0000 (0:00:00.385) 0:00:48.060 *****
2025-12-13 06:53:55,275 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:55,281 p=31853 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] ***
2025-12-13 06:53:55,281 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:03.369) 0:00:51.431 *****
2025-12-13 06:53:55,281 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:03.369) 0:00:51.429 *****
2025-12-13 06:53:55,500 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:53:55,510 p=31853 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] ***
2025-12-13 06:53:55,510 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:00.229) 0:00:51.660 *****
2025-12-13 06:53:55,510 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:00.229) 0:00:51.659 *****
2025-12-13 06:53:55,540 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml)
2025-12-13 06:53:55,548 p=31853 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] *********
2025-12-13 06:53:55,548 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:00.037) 0:00:51.697 *****
2025-12-13 06:53:55,548 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:00.037) 0:00:51.696 *****
2025-12-13 06:53:55,563 p=31853 u=zuul n=ansible | ok: [localhost] =>
  cifmw_ci_setup_packages:
  - bash-completion
  - ca-certificates
  - git-core
  - make
  - tar
  - tmux
  - python3-pip
2025-12-13 06:53:55,569 p=31853 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] ***
2025-12-13 06:53:55,569 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:00.021) 0:00:51.719 *****
2025-12-13 06:53:55,569 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:53:55 +0000 (0:00:00.021) 0:00:51.718 *****
2025-12-13 06:54:21,287 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:54:21,293 p=31853 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] ***
2025-12-13 06:54:21,293 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:21 +0000 (0:00:25.723) 0:01:17.442 *****
2025-12-13 06:54:21,293 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:21 +0000 (0:00:25.723) 0:01:17.441 *****
2025-12-13 06:54:21,444 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:54:21,451 p=31853 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] ***
2025-12-13 06:54:21,451 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:21 +0000 (0:00:00.157) 0:01:17.600 *****
2025-12-13 06:54:21,451 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:21 +0000 (0:00:00.157) 0:01:17.599 *****
2025-12-13 06:54:21,610 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:54:21,616 p=31853 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] ***
2025-12-13 06:54:21,616 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:21 +0000 (0:00:00.165) 0:01:17.766 *****
2025-12-13 06:54:21,617 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:21 +0000 (0:00:00.165) 0:01:17.765 *****
2025-12-13 06:54:28,087 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:54:28,093 p=31853 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] ***
2025-12-13 06:54:28,093 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:06.476) 0:01:24.242 *****
2025-12-13 06:54:28,093 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:06.476) 0:01:24.241 *****
2025-12-13 06:54:28,113 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:28,119 p=31853 u=zuul n=ansible | TASK [ci_setup : Create completion file] ***************************************
2025-12-13 06:54:28,119 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.026) 0:01:24.269 *****
2025-12-13 06:54:28,119 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.026) 0:01:24.268 *****
2025-12-13 06:54:28,367 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:54:28,373 p=31853 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] ***
2025-12-13 06:54:28,373 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.253) 0:01:24.522 *****
2025-12-13 06:54:28,373 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.253) 0:01:24.521 *****
2025-12-13 06:54:28,634 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:54:28,640 p=31853 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] ****
2025-12-13 06:54:28,640 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.267) 0:01:24.789 *****
2025-12-13 06:54:28,640 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.267) 0:01:24.788 *****
2025-12-13 06:54:28,653 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:28,659 p=31853 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] ***
2025-12-13 06:54:28,659 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.808 *****
2025-12-13 06:54:28,659 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.807 *****
2025-12-13 06:54:28,671 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:28,678 p=31853 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] ***
2025-12-13 06:54:28,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.827 *****
2025-12-13 06:54:28,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.826 *****
2025-12-13 06:54:28,691 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:28,697 p=31853 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] ***
2025-12-13 06:54:28,697 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.019) 0:01:24.847 *****
2025-12-13 06:54:28,697 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.019) 0:01:24.846 *****
2025-12-13 06:54:28,710 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:28,716 p=31853 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] ***
2025-12-13 06:54:28,716 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.866 *****
2025-12-13 06:54:28,716 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.865 *****
2025-12-13 06:54:28,728 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:28,734 p=31853 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] ***
2025-12-13 06:54:28,735 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.884 *****
2025-12-13 06:54:28,735 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.018) 0:01:24.883 *****
2025-12-13 06:54:28,751 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:28,758 p=31853 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] ***
2025-12-13 06:54:28,758 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.023) 0:01:24.907 *****
2025-12-13 06:54:28,758 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:28 +0000 (0:00:00.023) 0:01:24.906 *****
2025-12-13 06:54:28,957 p=31853 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr)
2025-12-13 06:54:29,117 p=31853 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/logs)
2025-12-13 06:54:29,279 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/tmp)
2025-12-13 06:54:29,451 p=31853 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/volumes)
2025-12-13 06:54:29,624 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters)
2025-12-13 06:54:29,635 p=31853 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] ***
2025-12-13 06:54:29,635 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:29 +0000 (0:00:00.877) 0:01:25.785 *****
2025-12-13 06:54:29,635 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:29 +0000 (0:00:00.877) 0:01:25.784 *****
2025-12-13 06:54:29,745 p=31853 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] ***
2025-12-13 06:54:29,745 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:29 +0000 (0:00:00.109) 0:01:25.894 *****
2025-12-13 06:54:29,745 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:29 +0000 (0:00:00.109) 0:01:25.893 *****
2025-12-13 06:54:29,916 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts)
2025-12-13 06:54:30,061 p=31853 u=zuul n=ansible | changed: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks)
2025-12-13 06:54:30,202 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters)
2025-12-13 06:54:30,209 p=31853 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] ***
2025-12-13 06:54:30,209 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.464) 0:01:26.359 *****
2025-12-13 06:54:30,209 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.464) 0:01:26.358 *****
2025-12-13 06:54:30,240 p=31853 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] ***
2025-12-13 06:54:30,240 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.030) 0:01:26.389 *****
2025-12-13 06:54:30,240 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.030) 0:01:26.388 *****
2025-12-13 06:54:30,280 p=31853 u=zuul n=ansible | ok: [localhost] => (item={'branch': 'main', 'change': '379', 'change_url': 'https://github.com/openstack-k8s-operators/test-operator/pull/379', 'commit_id': 'd19f803f400b92d4afd97dd749e753a7435bfaca', 'patchset': 'd19f803f400b92d4afd97dd749e753a7435bfaca', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/test-operator', 'name': 'openstack-k8s-operators/test-operator', 'short_name': 'test-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/test-operator'}, 'topic': None})
2025-12-13 06:54:30,287 p=31853 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] ***
2025-12-13 06:54:30,287 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.047) 0:01:26.437 *****
2025-12-13 06:54:30,287 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.047) 0:01:26.436 *****
2025-12-13 06:54:30,329 p=31853 u=zuul n=ansible | ok: [localhost] => (item={'branch': 'main', 'change': '379', 'change_url': 'https://github.com/openstack-k8s-operators/test-operator/pull/379', 'commit_id': 'd19f803f400b92d4afd97dd749e753a7435bfaca', 'patchset': 'd19f803f400b92d4afd97dd749e753a7435bfaca', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/test-operator', 'name': 'openstack-k8s-operators/test-operator', 'short_name': 'test-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/test-operator'}, 'topic': None}) =>
  msg: |
    _repo_operator_name: test
    _repo_operator_info: [{'key': 'TEST_REPO', 'value': '/home/zuul/src/github.com/openstack-k8s-operators/test-operator'}, {'key': 'TEST_BRANCH', 'value': ''}]
    cifmw_install_yamls_operators_repo: {'TEST_REPO': '/home/zuul/src/github.com/openstack-k8s-operators/test-operator', 'TEST_BRANCH': ''}
2025-12-13 06:54:30,341 p=31853 u=zuul n=ansible | TASK [Customize install_yamls devsetup vars if needed name=install_yamls, tasks_from=customize_devsetup_vars.yml] ***
2025-12-13 06:54:30,341 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.053) 0:01:26.490 *****
2025-12-13 06:54:30,341 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.053) 0:01:26.489 *****
2025-12-13 06:54:30,376 p=31853 u=zuul n=ansible | TASK [install_yamls : Update opm_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^opm_version:, line=opm_version: {{ cifmw_install_yamls_opm_version }}, state=present] ***
2025-12-13 06:54:30,377 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.035) 0:01:26.526 *****
2025-12-13 06:54:30,377 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.035) 0:01:26.525 *****
2025-12-13 06:54:30,393 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:30,399 p=31853 u=zuul n=ansible | TASK [install_yamls : Update sdk_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^sdk_version:, line=sdk_version: {{ cifmw_install_yamls_sdk_version }}, state=present] ***
2025-12-13 06:54:30,399 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.022) 0:01:26.549 *****
2025-12-13 06:54:30,400 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.022) 0:01:26.548 *****
2025-12-13 06:54:30,416 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:30,422 p=31853 u=zuul n=ansible | TASK [install_yamls : Update go_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^go_version:, line=go_version: {{ cifmw_install_yamls_go_version }}, state=present] ***
2025-12-13 06:54:30,422 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.022) 0:01:26.572 *****
2025-12-13 06:54:30,422 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.022) 0:01:26.571 *****
2025-12-13 06:54:30,440 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:30,448 p=31853 u=zuul n=ansible | TASK [install_yamls : Update kustomize_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^kustomize_version:, line=kustomize_version: {{ cifmw_install_yamls_kustomize_version }}, state=present] ***
2025-12-13 06:54:30,448 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.025) 0:01:26.597 *****
2025-12-13 06:54:30,448 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.025) 0:01:26.596 *****
2025-12-13 06:54:30,465 p=31853 u=zuul n=ansible | skipping: [localhost]
2025-12-13 06:54:30,476 p=31853 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] ***
2025-12-13 06:54:30,476 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.028) 0:01:26.625 *****
2025-12-13 06:54:30,476 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.028) 0:01:26.624 *****
2025-12-13 06:54:30,533 p=31853 u=zuul n=ansible | ok: [localhost] => (item={'BMO_SETUP': False, 'INSTALL_CERT_MANAGER': False})
2025-12-13 06:54:30,541 p=31853 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|antelope|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] ***
2025-12-13 06:54:30,541 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.065) 0:01:26.691 *****
2025-12-13 06:54:30,541 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.065) 0:01:26.689 *****
2025-12-13 06:54:30,576 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:54:30,582 p=31853 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] ***
2025-12-13 06:54:30,582 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.041) 0:01:26.732 *****
2025-12-13 06:54:30,583 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:30 +0000 (0:00:00.041) 0:01:26.731 *****
2025-12-13 06:54:31,055 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:54:31,062 p=31853 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] ***
2025-12-13 06:54:31,062 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.479) 0:01:27.212 *****
2025-12-13 06:54:31,062 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.479) 0:01:27.210 *****
2025-12-13 06:54:31,225 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:54:31,232 p=31853 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] ***
2025-12-13 06:54:31,232 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.170) 0:01:27.382 *****
2025-12-13 06:54:31,232 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.170) 0:01:27.381 *****
2025-12-13 06:54:31,262 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:54:31,275 p=31853 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] ***
2025-12-13 06:54:31,275 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.042) 0:01:27.425 *****
2025-12-13 06:54:31,275 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.042) 0:01:27.423 *****
2025-12-13 06:54:31,609 p=31853 u=zuul n=ansible | changed: [localhost]
2025-12-13 06:54:31,615 p=31853 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] ***
2025-12-13 06:54:31,615 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.340) 0:01:27.765 *****
2025-12-13 06:54:31,615 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.340) 0:01:27.763 *****
2025-12-13 06:54:31,636 p=31853 u=zuul n=ansible | ok: [localhost]
2025-12-13 06:54:31,642 p=31853 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] ***
2025-12-13 06:54:31,642 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.027) 0:01:27.792 *****
2025-12-13 06:54:31,642 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.027) 0:01:27.791 *****
2025-12-13 06:54:31,658 p=31853 u=zuul n=ansible | ok: [localhost] =>
  cifmw_install_yamls_environment:
    BMO_SETUP: false
    CHECKOUT_FROM_OPENSTACK_REF: 'true'
    INSTALL_CERT_MANAGER: false
    OPENSTACK_K8S_BRANCH: main
    OUT: /home/zuul/ci-framework-data/artifacts/manifests
    OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
    TEST_BRANCH: ''
    TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator
2025-12-13 06:54:31,664 p=31853 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] ***
2025-12-13 06:54:31,664 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.022) 0:01:27.814 *****
2025-12-13 06:54:31,664 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.022) 0:01:27.813 *****
2025-12-13 06:54:31,688 p=31853 u=zuul n=ansible | ok: [localhost] =>
  cifmw_install_yamls_defaults:
    ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24
    ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24
    ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24
    ADOPTED_STORAGE_NETWORK: 172.18.1.0/24
    ADOPTED_TENANT_NETWORK: 172.9.1.0/24
    ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml
    ANSIBLEEE_BRANCH: main
    ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml
    ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest
    ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml
    ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests
    ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests
    ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator
    ANSIBLEE_COMMIT_HASH: ''
    BARBICAN: config/samples/barbican_v1beta1_barbican.yaml
    BARBICAN_BRANCH: main
    BARBICAN_COMMIT_HASH: ''
    BARBICAN_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml
    BARBICAN_DEPL_IMG: unused
    BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest
    BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml
    BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests
    BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests
    BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git
    BARBICAN_SERVICE_ENABLED: 'true'
    BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sEFmdFjDUqRM2VemYslV5yGNWjokioJXsg8Nrlc3drU=
    BAREMETAL_BRANCH: main
    BAREMETAL_COMMIT_HASH: ''
    BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest
    BAREMETAL_OS_CONTAINER_IMG: ''
    BAREMETAL_OS_IMG: ''
    BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git
    BAREMETAL_TIMEOUT: 20m
    BASH_IMG: quay.io/openstack-k8s-operators/bash:latest
    BGP_ASN: '64999'
    BGP_LEAF_1: 100.65.4.1
    BGP_LEAF_2: 100.64.4.1
    BGP_OVN_ROUTING: 'false'
    BGP_PEER_ASN: '64999'
    BGP_SOURCE_IP: 172.30.4.2
    BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42
    BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24
    BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64
    BMAAS_INSTANCE_DISK_SIZE: '20'
    BMAAS_INSTANCE_MEMORY: '4096'
    BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas
    BMAAS_INSTANCE_NET_MODEL: virtio
    BMAAS_INSTANCE_OS_VARIANT: centos-stream9
    BMAAS_INSTANCE_VCPUS: '2'
    BMAAS_INSTANCE_VIRT_TYPE: kvm
    BMAAS_IPV4: 'true'
    BMAAS_IPV6: 'false'
    BMAAS_LIBVIRT_USER: sushyemu
    BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26
    BMAAS_METALLB_POOL_NAME: baremetal
    BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24
    BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64
    BMAAS_NETWORK_NAME: crc-bmaas
    BMAAS_NODE_COUNT: '1'
    BMAAS_OCP_INSTANCE_NAME: crc
    BMAAS_REDFISH_PASSWORD: password
    BMAAS_REDFISH_USERNAME: admin
    BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default
    BMAAS_SUSHY_EMULATOR_DRIVER: libvirt
    BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest
    BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator
    BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml
    BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack
    BMH_NAMESPACE: openstack
    BMO_BRANCH: release-0.9
    BMO_CLEANUP: 'true'
    BMO_COMMIT_HASH: ''
    BMO_IPA_BRANCH: stable/2024.1
    BMO_IRONIC_HOST: 192.168.122.10
    BMO_PROVISIONING_INTERFACE: ''
    BMO_REPO: https://github.com/metal3-io/baremetal-operator
    BMO_SETUP: false
    BMO_SETUP_ROUTE_REPLACE: 'true'
    BM_CTLPLANE_INTERFACE: enp1s0
    BM_INSTANCE_MEMORY: '8192'
    BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal
    BM_INSTANCE_NAME_SUFFIX: '0'
    BM_NETWORK_NAME: default
    BM_NODE_COUNT: '1'
    BM_ROOT_PASSWORD: ''
    BM_ROOT_PASSWORD_SECRET: ''
    CEILOMETER_CENTRAL_DEPL_IMG: unused
    CEILOMETER_NOTIFICATION_DEPL_IMG: unused
    CEPH_BRANCH: release-1.15
    CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml
    CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml
    CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml
    CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml
    CEPH_IMG: quay.io/ceph/demo:latest-squid
    CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml
    CEPH_REPO: https://github.com/rook/rook.git
    CERTMANAGER_TIMEOUT: 300s
    CHECKOUT_FROM_OPENSTACK_REF: 'true'
    CINDER: config/samples/cinder_v1beta1_cinder.yaml
    CINDERAPI_DEPL_IMG: unused
    CINDERBKP_DEPL_IMG: unused
    CINDERSCH_DEPL_IMG: unused
    CINDERVOL_DEPL_IMG: unused
    CINDER_BRANCH: main
    CINDER_COMMIT_HASH: ''
    CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml
    CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest
    CINDER_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml
    CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests
    CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests
    CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git
    CLEANUP_DIR_CMD: rm -Rf
    CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11'
    CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12'
    CRC_HTTPS_PROXY: ''
    CRC_HTTP_PROXY: ''
    CRC_STORAGE_NAMESPACE: crc-storage
    CRC_STORAGE_RETRIES: '3'
    CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz'''
    CRC_VERSION: latest
    DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret
    DATAPLANE_ANSIBLE_USER: ''
    DATAPLANE_COMPUTE_IP: 192.168.122.100
    DATAPLANE_CONTAINER_PREFIX: openstack
    DATAPLANE_CONTAINER_TAG: current-podified
    DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest
    DATAPLANE_DEFAULT_GW: 192.168.122.1
    DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null
    DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100%
    DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned
    DATAPLANE_NETWORKER_IP: 192.168.122.200
    DATAPLANE_NETWORK_INTERFACE_NAME: eth0
    DATAPLANE_NOVA_NFS_PATH: ''
    DATAPLANE_NTP_SERVER: pool.ntp.org
    DATAPLANE_PLAYBOOK: osp.edpm.download_cache
    DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9
    DATAPLANE_RUNNER_IMG: ''
    DATAPLANE_SERVER_ROLE: compute
    DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']'
    DATAPLANE_TIMEOUT: 30m
    DATAPLANE_TLS_ENABLED: 'true'
    DATAPLANE_TOTAL_NETWORKER_NODES: '1'
    DATAPLANE_TOTAL_NODES: '1'
    DBSERVICE: galera
    DESIGNATE: config/samples/designate_v1beta1_designate.yaml
    DESIGNATE_BRANCH: main
    DESIGNATE_COMMIT_HASH: ''
    DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml
    DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest
    DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml
    DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests
    DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests
    DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git
    DNSDATA: config/samples/network_v1beta1_dnsdata.yaml
    DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml
    DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml
    DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml
    DNS_DEPL_IMG: unused
    DNS_DOMAIN: localdomain
    DOWNLOAD_TOOLS_SELECTION: all
    EDPM_ATTACH_EXTNET: 'true'
    EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]'''
    EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]'''
    EDPM_COMPUTE_CELLS: '1'
    EDPM_COMPUTE_CEPH_ENABLED: 'true'
    EDPM_COMPUTE_CEPH_NOVA: 'true'
    EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true'
    EDPM_COMPUTE_SRIOV_ENABLED: 'true'
    EDPM_COMPUTE_SUFFIX: '0'
    EDPM_CONFIGURE_DEFAULT_ROUTE: 'true'
    EDPM_CONFIGURE_HUGEPAGES: 'false'
    EDPM_CONFIGURE_NETWORKING: 'true'
    EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra
    EDPM_NETWORKER_SUFFIX: '0'
    EDPM_TOTAL_NETWORKERS: '1'
    EDPM_TOTAL_NODES: '1'
    GALERA_REPLICAS: ''
    GENERATE_SSH_KEYS: 'true'
    GIT_CLONE_OPTS: ''
    GLANCE: config/samples/glance_v1beta1_glance.yaml
    GLANCEAPI_DEPL_IMG: unused
    GLANCE_BRANCH: main
    GLANCE_COMMIT_HASH: ''
    GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml
    GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest
    GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml
    GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests
    GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests
    GLANCE_REPO: https://github.com/openstack-k8s-operators/glance-operator.git
    HEAT: config/samples/heat_v1beta1_heat.yaml
    HEATAPI_DEPL_IMG: unused
    HEATCFNAPI_DEPL_IMG: unused
    HEATENGINE_DEPL_IMG: unused
    HEAT_AUTH_ENCRYPTION_KEY: 767c3ed056cbaa3b9dfedb8c6f825bf0
    HEAT_BRANCH: main
    HEAT_COMMIT_HASH: ''
    HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml
    HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest
    HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml
    HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests
    HEAT_KUTTL_NAMESPACE: heat-kuttl-tests
    HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git
    HEAT_SERVICE_ENABLED: 'true'
    HORIZON: config/samples/horizon_v1beta1_horizon.yaml
    HORIZON_BRANCH: main
    HORIZON_COMMIT_HASH: ''
    HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml
    HORIZON_DEPL_IMG: unused
    HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest
    HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml
    HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests
    HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests
    HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git
    INFRA_BRANCH: main
    INFRA_COMMIT_HASH: ''
    INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest
    INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml
    INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests
    INFRA_KUTTL_NAMESPACE: infra-kuttl-tests
    INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git
    INSTALL_CERT_MANAGER: false
    INSTALL_NMSTATE: true || false
    INSTALL_NNCP: true || false
    INTERNALAPI_HOST_ROUTES: ''
    IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24
    IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64
    IPV6_LAB_LIBVIRT_STORAGE_POOL: default
    IPV6_LAB_MANAGE_FIREWALLD: 'true'
    IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24
    IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64
    IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router
    IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64
    IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24
    IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1
    IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3
    IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX:
fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml KEYSTONEAPI_DEPL_IMG: unused KEYSTONE_BRANCH: main KEYSTONE_COMMIT_HASH: '' KEYSTONE_FEDERATION_CLIENT_SECRET: COX8bmlKAWn56XCGMrKQJj7dgHNAOl6f KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git KUBEADMIN_PWD: '12345678' LIBVIRT_SECRET: libvirt-secret LOKI_DEPLOY_MODE: openshift-network LOKI_DEPLOY_NAMESPACE: netobserv LOKI_DEPLOY_SIZE: 1x.demo LOKI_NAMESPACE: openshift-operators-redhat LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki LOKI_SUBSCRIPTION: loki-operator LVMS_CR: '1' MANILA: config/samples/manila_v1beta1_manila.yaml MANILAAPI_DEPL_IMG: unused MANILASCH_DEPL_IMG: unused MANILASHARE_DEPL_IMG: unused MANILA_BRANCH: main MANILA_COMMIT_HASH: '' MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests MANILA_KUTTL_NAMESPACE: manila-kuttl-tests MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git MANILA_SERVICE_ENABLED: 'true' MARIADB: config/samples/mariadb_v1beta1_galera.yaml MARIADB_BRANCH: main MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml MARIADB_CHAINSAW_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests MARIADB_COMMIT_HASH: '' MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml MARIADB_DEPL_IMG: unused MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml MEMCACHED_DEPL_IMG: unused METADATA_SHARED_SECRET: '1234567842' METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90 METALLB_POOL: 192.168.122.80-192.168.122.90 MICROSHIFT: '0' NAMESPACE: openstack NETCONFIG: config/samples/network_v1beta1_netconfig.yaml NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml NETCONFIG_DEPL_IMG: unused NETOBSERV_DEPLOY_NAMESPACE: netobserv NETOBSERV_NAMESPACE: openshift-netobserv-operator NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net NETOBSERV_SUBSCRIPTION: netobserv-operator NETWORK_BGP: 'false' NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0 NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0 NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0 NETWORK_ISOLATION: 'true' NETWORK_ISOLATION_INSTANCE_NAME: crc NETWORK_ISOLATION_IPV4: 'true' NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24 NETWORK_ISOLATION_IPV4_NAT: 'true' NETWORK_ISOLATION_IPV6: 'false' NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64 NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10 NETWORK_ISOLATION_MAC: '52:54:00:11:11:10' NETWORK_ISOLATION_NETWORK_NAME: net-iso NETWORK_ISOLATION_NET_NAME: default NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true' NETWORK_MTU: '1500' NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0 NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0 NETWORK_STORAGE_MACVLAN: '' NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0 NETWORK_VLAN_START: '20' NETWORK_VLAN_STEP: '1' NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml NEUTRONAPI_DEPL_IMG: unused NEUTRON_BRANCH: main NEUTRON_COMMIT_HASH: '' NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git NFS_HOME: /home/nfs NMSTATE_NAMESPACE: openshift-nmstate NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8 NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator NNCP_ADDITIONAL_HOST_ROUTES: '' NNCP_BGP_1_INTERFACE: enp7s0 NNCP_BGP_1_IP_ADDRESS: 100.65.4.2 NNCP_BGP_2_INTERFACE: enp8s0 NNCP_BGP_2_IP_ADDRESS: 100.64.4.2 NNCP_BRIDGE: ospbr NNCP_CLEANUP_TIMEOUT: 120s NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::' NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10' NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 
192.168.122 NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10' NNCP_DNS_SERVER: 192.168.122.1 NNCP_DNS_SERVER_IPV6: fd00:aaaa::1 NNCP_GATEWAY: 192.168.122.1 NNCP_GATEWAY_IPV6: fd00:aaaa::1 NNCP_INTERFACE: enp6s0 NNCP_NODES: '' NNCP_TIMEOUT: 240s NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_BRANCH: main NOVA_COMMIT_HASH: '' NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git NUMBER_OF_INSTANCES: '1' OCP_NETWORK_NAME: crc OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_BRANCH: main OCTAVIA_COMMIT_HASH: '' OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git OKD: 'false' OPENSTACK_BRANCH: main OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest OPENSTACK_COMMIT_HASH: '' OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_CRDS_DIR: openstack_crds OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest OPENSTACK_K8S_BRANCH: main OPENSTACK_K8S_TAG: latest OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests OPENSTACK_NEUTRON_CUSTOM_CONF: '' OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator OPERATOR_CHANNEL: '' OPERATOR_NAMESPACE: openstack-operators OPERATOR_SOURCE: '' OPERATOR_SOURCE_NAMESPACE: '' OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml OVNCONTROLLER_NMAP: 'true' OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml OVN_BRANCH: main OVN_COMMIT_HASH: '' OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml OVN_KUTTL_DIR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests OVN_KUTTL_NAMESPACE: ovn-kuttl-tests OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git PASSWORD: '12345678' PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml PLACEMENTAPI_DEPL_IMG: unused PLACEMENT_BRANCH: main PLACEMENT_COMMIT_HASH: '' PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git PULL_SECRET: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/pull-secret.txt RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_BRANCH: patches RABBITMQ_COMMIT_HASH: '' RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml RABBITMQ_DEPL_IMG: unused RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git REDHAT_OPERATORS: 'false' REDIS: config/samples/redis_v1beta1_redis.yaml REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml REDIS_DEPL_IMG: unused RH_REGISTRY_PWD: '' RH_REGISTRY_USER: '' SECRET: osp-secret SG_CORE_DEPL_IMG: unused STANDALONE_COMPUTE_DRIVER: libvirt STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0 STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0 STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0 STANDALONE_STORAGE_NET_PREFIX: 172.18.0 STANDALONE_TENANT_NET_PREFIX: 172.19.0 STORAGEMGMT_HOST_ROUTES: '' STORAGE_CLASS: local-storage STORAGE_HOST_ROUTES: '' SWIFT: config/samples/swift_v1beta1_swift.yaml SWIFT_BRANCH: main SWIFT_COMMIT_HASH: '' SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_BRANCH: main TELEMETRY_COMMIT_HASH: '' TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests TELEMETRY_KUTTL_RELPATH: test/kuttl/suites TELEMETRY_REPO: 
https://github.com/openstack-k8s-operators/telemetry-operator.git TENANT_HOST_ROUTES: '' TEST_BRANCH: '' TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator TIMEOUT: 300s TLS_ENABLED: 'false' tripleo_deploy: 'export REGISTRY_PWD:' 2025-12-13 06:54:31,695 p=31853 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] *** 2025-12-13 06:54:31,695 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.030) 0:01:27.845 ***** 2025-12-13 06:54:31,695 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.030) 0:01:27.843 ***** 2025-12-13 06:54:31,976 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:31,983 p=31853 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] ***** 2025-12-13 06:54:31,983 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.287) 0:01:28.132 ***** 2025-12-13 06:54:31,983 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:31 +0000 (0:00:00.287) 0:01:28.131 ***** 2025-12-13 06:54:32,000 p=31853 u=zuul n=ansible | ok: [localhost] => cifmw_generate_makes: changed: false debug: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile: - all - help - cleanup - deploy_cleanup - wait - crc_storage - crc_storage_cleanup - crc_storage_release - crc_storage_with_retries - crc_storage_cleanup_with_retries - operator_namespace - namespace - namespace_cleanup - input - input_cleanup - crc_bmo_setup - crc_bmo_cleanup - openstack_prep - openstack - openstack_wait - openstack_init - openstack_cleanup - openstack_repo - openstack_deploy_prep - openstack_deploy - openstack_wait_deploy - openstack_deploy_cleanup - openstack_update_run - update_services - update_system - openstack_patch_version - edpm_deploy_generate_keys - edpm_patch_ansible_runner_image - edpm_deploy_prep - edpm_deploy_cleanup - edpm_deploy - edpm_deploy_baremetal_prep - edpm_deploy_baremetal - edpm_wait_deploy_baremetal - edpm_wait_deploy - edpm_register_dns - edpm_nova_discover_hosts - openstack_crds - openstack_crds_cleanup - edpm_deploy_networker_prep - edpm_deploy_networker_cleanup - edpm_deploy_networker - infra_prep - infra - infra_cleanup - dns_deploy_prep - dns_deploy - dns_deploy_cleanup - netconfig_deploy_prep - netconfig_deploy - netconfig_deploy_cleanup - memcached_deploy_prep - memcached_deploy - memcached_deploy_cleanup - keystone_prep - keystone - keystone_cleanup - keystone_deploy_prep - keystone_deploy - keystone_deploy_cleanup - barbican_prep - barbican - barbican_cleanup - barbican_deploy_prep - barbican_deploy - barbican_deploy_validate - barbican_deploy_cleanup - mariadb - mariadb_cleanup - mariadb_deploy_prep - mariadb_deploy - mariadb_deploy_cleanup - placement_prep - placement - placement_cleanup - placement_deploy_prep - placement_deploy - placement_deploy_cleanup - glance_prep - glance - glance_cleanup - glance_deploy_prep - glance_deploy - glance_deploy_cleanup - ovn_prep - ovn - ovn_cleanup - ovn_deploy_prep - ovn_deploy - ovn_deploy_cleanup - neutron_prep - neutron - neutron_cleanup - neutron_deploy_prep - neutron_deploy - neutron_deploy_cleanup - cinder_prep - cinder - cinder_cleanup - cinder_deploy_prep - cinder_deploy - cinder_deploy_cleanup - rabbitmq_prep - rabbitmq - rabbitmq_cleanup - rabbitmq_deploy_prep - rabbitmq_deploy - rabbitmq_deploy_cleanup - ironic_prep - ironic - ironic_cleanup - ironic_deploy_prep - 
ironic_deploy - ironic_deploy_cleanup - octavia_prep - octavia - octavia_cleanup - octavia_deploy_prep - octavia_deploy - octavia_deploy_cleanup - designate_prep - designate - designate_cleanup - designate_deploy_prep - designate_deploy - designate_deploy_cleanup - nova_prep - nova - nova_cleanup - nova_deploy_prep - nova_deploy - nova_deploy_cleanup - mariadb_kuttl_run - mariadb_kuttl - kuttl_db_prep - kuttl_db_cleanup - kuttl_common_prep - kuttl_common_cleanup - keystone_kuttl_run - keystone_kuttl - barbican_kuttl_run - barbican_kuttl - placement_kuttl_run - placement_kuttl - cinder_kuttl_run - cinder_kuttl - neutron_kuttl_run - neutron_kuttl - octavia_kuttl_run - octavia_kuttl - designate_kuttl - designate_kuttl_run - ovn_kuttl_run - ovn_kuttl - infra_kuttl_run - infra_kuttl - ironic_kuttl_run - ironic_kuttl - ironic_kuttl_crc - heat_kuttl_run - heat_kuttl - heat_kuttl_crc - ansibleee_kuttl_run - ansibleee_kuttl_cleanup - ansibleee_kuttl_prep - ansibleee_kuttl - glance_kuttl_run - glance_kuttl - manila_kuttl_run - manila_kuttl - swift_kuttl_run - swift_kuttl - horizon_kuttl_run - horizon_kuttl - openstack_kuttl_run - openstack_kuttl - mariadb_chainsaw_run - mariadb_chainsaw - horizon_prep - horizon - horizon_cleanup - horizon_deploy_prep - horizon_deploy - horizon_deploy_cleanup - heat_prep - heat - heat_cleanup - heat_deploy_prep - heat_deploy - heat_deploy_cleanup - ansibleee_prep - ansibleee - ansibleee_cleanup - baremetal_prep - baremetal - baremetal_cleanup - ceph_help - ceph - ceph_cleanup - rook_prep - rook - rook_deploy_prep - rook_deploy - rook_crc_disk - rook_cleanup - lvms - nmstate - nncp - nncp_cleanup - netattach - netattach_cleanup - metallb - metallb_config - metallb_config_cleanup - metallb_cleanup - loki - loki_cleanup - loki_deploy - loki_deploy_cleanup - netobserv - netobserv_cleanup - netobserv_deploy - netobserv_deploy_cleanup - manila_prep - manila - manila_cleanup - manila_deploy_prep - manila_deploy - manila_deploy_cleanup - telemetry_prep - telemetry - telemetry_cleanup - telemetry_deploy_prep - telemetry_deploy - telemetry_deploy_cleanup - telemetry_kuttl_run - telemetry_kuttl - swift_prep - swift - swift_cleanup - swift_deploy_prep - swift_deploy - swift_deploy_cleanup - certmanager - certmanager_cleanup - validate_marketplace - redis_deploy_prep - redis_deploy - redis_deploy_cleanup - set_slower_etcd_profile /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile: - help - download_tools - nfs - nfs_cleanup - crc - crc_cleanup - crc_scrub - crc_attach_default_interface - crc_attach_default_interface_cleanup - ipv6_lab_network - ipv6_lab_network_cleanup - ipv6_lab_nat64_router - ipv6_lab_nat64_router_cleanup - ipv6_lab_sno - ipv6_lab_sno_cleanup - ipv6_lab - ipv6_lab_cleanup - attach_default_interface - attach_default_interface_cleanup - network_isolation_bridge - network_isolation_bridge_cleanup - edpm_baremetal_compute - edpm_compute - edpm_compute_bootc - edpm_ansible_runner - edpm_computes_bgp - edpm_compute_repos - edpm_compute_cleanup - edpm_networker - edpm_networker_cleanup - edpm_deploy_instance - tripleo_deploy - standalone_deploy - standalone_sync - standalone - standalone_cleanup - standalone_snapshot - standalone_revert - cifmw_prepare - cifmw_cleanup - bmaas_network - bmaas_network_cleanup - bmaas_route_crc_and_crc_bmaas_networks - bmaas_route_crc_and_crc_bmaas_networks_cleanup - bmaas_crc_attach_network - bmaas_crc_attach_network_cleanup - bmaas_crc_baremetal_bridge - bmaas_crc_baremetal_bridge_cleanup - 
bmaas_baremetal_net_nad - bmaas_baremetal_net_nad_cleanup - bmaas_metallb - bmaas_metallb_cleanup - bmaas_virtual_bms - bmaas_virtual_bms_cleanup - bmaas_sushy_emulator - bmaas_sushy_emulator_cleanup - bmaas_sushy_emulator_wait - bmaas_generate_nodes_yaml - bmaas - bmaas_cleanup failed: false success: true 2025-12-13 06:54:32,006 p=31853 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] *** 2025-12-13 06:54:32,007 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:32 +0000 (0:00:00.023) 0:01:28.156 ***** 2025-12-13 06:54:32,007 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:32 +0000 (0:00:00.023) 0:01:28.155 ***** 2025-12-13 06:54:32,366 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:54:32,372 p=31853 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] *** 2025-12-13 06:54:32,372 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:32 +0000 (0:00:00.365) 0:01:28.522 ***** 2025-12-13 06:54:32,372 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:32 +0000 (0:00:00.365) 0:01:28.520 ***** 2025-12-13 06:54:32,390 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:54:32,403 p=31853 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] *** 2025-12-13 06:54:32,403 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:32 +0000 (0:00:00.031) 0:01:28.553 ***** 2025-12-13 06:54:32,403 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:32 +0000 (0:00:00.031) 0:01:28.552 ***** 2025-12-13 06:54:33,754 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:54:33,761 p=31853 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] *** 2025-12-13 06:54:33,761 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:33 +0000 (0:00:01.357) 0:01:29.911 ***** 2025-12-13 06:54:33,761 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:33 +0000 (0:00:01.357) 0:01:29.910 ***** 2025-12-13 06:54:33,786 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:33,797 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Create artifacts with custom params mode=0644, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] *** 2025-12-13 06:54:33,797 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:33 +0000 (0:00:00.035) 0:01:29.947 ***** 2025-12-13 06:54:33,797 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:33 +0000 (0:00:00.035) 0:01:29.946 ***** 2025-12-13 06:54:34,151 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:54:34,164 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid 
quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:54:34,164 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.366) 0:01:30.313 ***** 2025-12-13 06:54:34,164 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.366) 0:01:30.312 ***** 2025-12-13 06:54:34,213 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:34,219 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 06:54:34,219 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.055) 0:01:30.369 ***** 2025-12-13 06:54:34,219 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.055) 0:01:30.367 ***** 2025-12-13 06:54:34,285 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:34,292 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:54:34,292 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.073) 0:01:30.442 ***** 2025-12-13 06:54:34,292 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.073) 0:01:30.441 ***** 2025-12-13 06:54:34,379 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Download needed tools', 'inventory': 'localhost,', 'connection': 'local', 'type': 'playbook', 'source': '/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml'}) 2025-12-13 06:54:34,388 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for Download needed tools cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 06:54:34,388 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.095) 0:01:30.538 ***** 2025-12-13 06:54:34,388 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.095) 0:01:30.537 ***** 2025-12-13 06:54:34,428 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:34,435 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 06:54:34,435 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.046) 0:01:30.585 ***** 2025-12-13 06:54:34,435 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.046) 0:01:30.583 ***** 2025-12-13 06:54:34,596 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:34,603 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
*** 2025-12-13 06:54:34,604 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.168) 0:01:30.753 ***** 2025-12-13 06:54:34,604 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.168) 0:01:30.752 ***** 2025-12-13 06:54:34,616 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:54:34,624 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 06:54:34,624 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.020) 0:01:30.773 ***** 2025-12-13 06:54:34,624 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.020) 0:01:30.772 ***** 2025-12-13 06:54:34,777 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:34,785 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 06:54:34,785 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.161) 0:01:30.935 ***** 2025-12-13 06:54:34,785 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.161) 0:01:30.933 ***** 2025-12-13 06:54:34,800 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:34,808 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 06:54:34,808 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.022) 0:01:30.957 ***** 2025-12-13 06:54:34,808 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.022) 0:01:30.956 ***** 2025-12-13 06:54:34,963 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:34,970 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 06:54:34,970 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.162) 0:01:31.120 ***** 2025-12-13 06:54:34,970 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:34 +0000 (0:00:00.162) 0:01:31.118 ***** 2025-12-13 06:54:35,128 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:54:35,137 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Download needed tools] *************** 2025-12-13 06:54:35,137 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:35 +0000 (0:00:00.166) 0:01:31.287 ***** 2025-12-13 06:54:35,137 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:54:35 +0000 (0:00:00.166) 0:01:31.285 ***** 2025-12-13 06:54:35,181 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_run_hook_without_retry.log 2025-12-13 06:55:06,459 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:06,467 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Download needed tools] ****************** 2025-12-13 06:55:06,467 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:31.329) 0:02:02.617 ***** 2025-12-13 06:55:06,467 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:31.329) 0:02:02.615 ***** 2025-12-13 06:55:06,484 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:06,491 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name 
}}.yml] *** 2025-12-13 06:55:06,492 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.024) 0:02:02.641 ***** 2025-12-13 06:55:06,492 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.024) 0:02:02.640 ***** 2025-12-13 06:55:06,642 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:06,650 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 06:55:06,650 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.158) 0:02:02.800 ***** 2025-12-13 06:55:06,650 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.158) 0:02:02.798 ***** 2025-12-13 06:55:06,663 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:06,693 p=31853 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2025-12-13 06:55:06,709 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 06:55:06,709 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.058) 0:02:02.858 ***** 2025-12-13 06:55:06,709 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.058) 0:02:02.857 ***** 2025-12-13 06:55:06,750 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:06,758 p=31853 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 2025-12-13 06:55:06,758 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.049) 0:02:02.908 ***** 2025-12-13 06:55:06,759 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.049) 0:02:02.907 ***** 2025-12-13 06:55:06,779 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:06,785 p=31853 u=zuul n=ansible | TASK [Perpare OpenShift provisioner node name=openshift_provisioner_node] ****** 2025-12-13 06:55:06,785 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.026) 0:02:02.935 ***** 2025-12-13 06:55:06,785 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.026) 0:02:02.934 ***** 2025-12-13 06:55:06,803 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:06,832 p=31853 u=zuul n=ansible | PLAY [Run cifmw_setup infra, build package, container and operators, deploy EDPM] *** 2025-12-13 06:55:06,863 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 06:55:06,864 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.078) 0:02:03.013 ***** 2025-12-13 06:55:06,864 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.078) 0:02:03.012 ***** 2025-12-13 06:55:06,943 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:06,952 p=31853 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-12-13 06:55:06,953 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.088) 0:02:03.102 ***** 2025-12-13 06:55:06,953 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:06 +0000 (0:00:00.088) 0:02:03.101 ***** 2025-12-13 06:55:07,107 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:07,114 p=31853 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file 
existance that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2025-12-13 06:55:07,114 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.161) 0:02:03.264 ***** 2025-12-13 06:55:07,115 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.161) 0:02:03.263 ***** 2025-12-13 06:55:07,134 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,141 p=31853 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2025-12-13 06:55:07,141 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.290 ***** 2025-12-13 06:55:07,141 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.289 ***** 2025-12-13 06:55:07,196 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,202 p=31853 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | b64decode | from_yaml }}, cacheable=True] *** 2025-12-13 06:55:07,202 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.061) 0:02:03.352 ***** 2025-12-13 06:55:07,202 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.061) 0:02:03.351 ***** 2025-12-13 06:55:07,221 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,234 p=31853 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2025-12-13 06:55:07,234 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.031) 0:02:03.384 ***** 2025-12-13 06:55:07,234 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.031) 0:02:03.382 ***** 2025-12-13 06:55:07,251 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,259 p=31853 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2025-12-13 06:55:07,259 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.024) 0:02:03.408 ***** 2025-12-13 06:55:07,259 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.024) 0:02:03.407 ***** 2025-12-13 06:55:07,278 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,286 p=31853 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2025-12-13 06:55:07,286 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.027) 0:02:03.435 ***** 2025-12-13 06:55:07,286 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.027) 0:02:03.434 ***** 2025-12-13 06:55:07,303 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,311 p=31853 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 06:55:07,311 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.025) 0:02:03.461 ***** 2025-12-13 06:55:07,311 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.025) 0:02:03.459 ***** 2025-12-13 06:55:07,471 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:07,478 p=31853 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] 
***************** 2025-12-13 06:55:07,478 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.167) 0:02:03.628 ***** 2025-12-13 06:55:07,479 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.167) 0:02:03.627 ***** 2025-12-13 06:55:07,504 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for localhost 2025-12-13 06:55:07,515 p=31853 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-12-13 06:55:07,515 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.036) 0:02:03.665 ***** 2025-12-13 06:55:07,515 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.036) 0:02:03.664 ***** 2025-12-13 06:55:07,534 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,541 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2025-12-13 06:55:07,541 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.691 ***** 2025-12-13 06:55:07,542 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.690 ***** 2025-12-13 06:55:07,561 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,568 p=31853 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] *** 2025-12-13 06:55:07,568 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.718 ***** 2025-12-13 06:55:07,568 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.717 ***** 2025-12-13 06:55:07,588 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,596 p=31853 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{ cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] *** 2025-12-13 06:55:07,596 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.028) 0:02:03.746 ***** 2025-12-13 06:55:07,597 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.028) 0:02:03.745 ***** 2025-12-13 06:55:07,625 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:07,631 p=31853 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] *** 2025-12-13 06:55:07,631 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.034) 0:02:03.781 ***** 2025-12-13 06:55:07,631 p=31853 u=zuul n=ansible | Saturday 13 
December 2025 06:55:07 +0000 (0:00:00.034) 0:02:03.780 ***** 2025-12-13 06:55:07,780 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:07,787 p=31853 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] *** 2025-12-13 06:55:07,787 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.155) 0:02:03.937 ***** 2025-12-13 06:55:07,787 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.155) 0:02:03.936 ***** 2025-12-13 06:55:07,810 p=31853 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2025-12-13 06:55:07,817 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ cifmw_openshift_login_kubeconfig }}] *** 2025-12-13 06:55:07,817 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.030) 0:02:03.967 ***** 2025-12-13 06:55:07,817 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.030) 0:02:03.966 ***** 2025-12-13 06:55:07,835 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,843 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] *** 2025-12-13 06:55:07,843 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.993 ***** 2025-12-13 06:55:07,843 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:03.992 ***** 2025-12-13 06:55:07,861 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,869 p=31853 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] *** 2025-12-13 06:55:07,869 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.025) 0:02:04.019 ***** 2025-12-13 06:55:07,869 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.025) 0:02:04.017 ***** 2025-12-13 06:55:07,886 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,895 p=31853 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else cifmw_openshift_login_retries_cnt|int + 1 }}] *** 2025-12-13 06:55:07,895 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:04.045 ***** 2025-12-13 06:55:07,895 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.026) 0:02:04.044 ***** 2025-12-13 06:55:07,916 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:07,924 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch token 
_raw_params=try_login.yml] ***************** 2025-12-13 06:55:07,924 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.029) 0:02:04.074 ***** 2025-12-13 06:55:07,925 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.029) 0:02:04.073 ***** 2025-12-13 06:55:07,946 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for localhost 2025-12-13 06:55:07,955 p=31853 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] *** 2025-12-13 06:55:07,955 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.030) 0:02:04.105 ***** 2025-12-13 06:55:07,955 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.030) 0:02:04.104 ***** 2025-12-13 06:55:07,969 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:07,977 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] *** 2025-12-13 06:55:07,977 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.021) 0:02:04.127 ***** 2025-12-13 06:55:07,977 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:07 +0000 (0:00:00.021) 0:02:04.125 ***** 2025-12-13 06:55:08,020 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_fetch_openshift.log 2025-12-13 06:55:08,290 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:08,297 p=31853 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] *** 2025-12-13 06:55:08,297 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.320) 0:02:04.447 ***** 2025-12-13 06:55:08,297 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.320) 0:02:04.445 ***** 2025-12-13 06:55:08,314 p=31853 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2025-12-13 06:55:08,321 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] *** 2025-12-13 06:55:08,322 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.024) 0:02:04.471 ***** 2025-12-13 06:55:08,322 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.024) 0:02:04.470 ***** 2025-12-13 06:55:08,568 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:08,576 p=31853 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] *** 2025-12-13 06:55:08,576 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.254) 0:02:04.726 ***** 2025-12-13 06:55:08,576 p=31853 u=zuul 
n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.254) 0:02:04.724 ***** 2025-12-13 06:55:08,599 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:08,606 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] *** 2025-12-13 06:55:08,606 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.029) 0:02:04.755 ***** 2025-12-13 06:55:08,606 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.029) 0:02:04.754 ***** 2025-12-13 06:55:08,844 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:08,852 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] *** 2025-12-13 06:55:08,852 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.246) 0:02:05.001 ***** 2025-12-13 06:55:08,852 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:08 +0000 (0:00:00.246) 0:02:05.000 ***** 2025-12-13 06:55:09,094 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:09,101 p=31853 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] **** 2025-12-13 06:55:09,102 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.249) 0:02:05.251 ***** 2025-12-13 06:55:09,102 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.249) 0:02:05.250 ***** 2025-12-13 06:55:09,352 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:09,360 p=31853 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] *** 2025-12-13 06:55:09,360 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.258) 0:02:05.510 ***** 2025-12-13 06:55:09,360 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.258) 0:02:05.509 ***** 2025-12-13 06:55:09,390 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:09,397 p=31853 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] *** 2025-12-13 06:55:09,397 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.037) 0:02:05.547 ***** 2025-12-13 06:55:09,397 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.037) 0:02:05.546 ***** 2025-12-13 06:55:09,756 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:09,763 p=31853 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') 
}}/artifacts/parameters/install-yamls-params.yml] *** 2025-12-13 06:55:09,764 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.366) 0:02:05.913 ***** 2025-12-13 06:55:09,764 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:09 +0000 (0:00:00.366) 0:02:05.912 ***** 2025-12-13 06:55:10,046 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:10,054 p=31853 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts/parameters/install-yamls-params.yml, mode=0600] *** 2025-12-13 06:55:10,054 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.290) 0:02:06.203 ***** 2025-12-13 06:55:10,054 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.290) 0:02:06.202 ***** 2025-12-13 06:55:10,424 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:10,436 p=31853 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 06:55:10,436 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.382) 0:02:06.586 ***** 2025-12-13 06:55:10,436 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.382) 0:02:06.585 ***** 2025-12-13 06:55:10,600 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:10,608 p=31853 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] *** 2025-12-13 06:55:10,608 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.171) 0:02:06.758 ***** 2025-12-13 06:55:10,608 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.171) 0:02:06.757 ***** 2025-12-13 06:55:10,631 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:10,642 p=31853 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] *** 2025-12-13 06:55:10,642 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.033) 0:02:06.791 ***** 2025-12-13 06:55:10,642 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:10 +0000 (0:00:00.033) 0:02:06.790 ***** 2025-12-13 06:55:11,425 p=31853 u=zuul n=ansible | changed: [localhost] => (item=openstack) 2025-12-13 06:55:12,031 p=31853 u=zuul n=ansible | changed: [localhost] => (item=openstack-operators) 2025-12-13 06:55:12,042 p=31853 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] *** 2025-12-13 06:55:12,042 p=31853 
u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:01.399) 0:02:08.191 ***** 2025-12-13 06:55:12,042 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:01.399) 0:02:08.190 ***** 2025-12-13 06:55:12,055 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,063 p=31853 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': {'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] *** 2025-12-13 06:55:12,063 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.021) 0:02:08.213 ***** 2025-12-13 06:55:12,063 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.021) 0:02:08.212 ***** 2025-12-13 06:55:12,082 p=31853 u=zuul n=ansible | skipping: [localhost] => (item=openstack) 2025-12-13 06:55:12,082 p=31853 u=zuul n=ansible | skipping: [localhost] => (item=openstack-operators) 2025-12-13 06:55:12,083 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,090 p=31853 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] *** 2025-12-13 06:55:12,090 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.026) 0:02:08.240 ***** 2025-12-13 06:55:12,090 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.026) 0:02:08.238 ***** 2025-12-13 06:55:12,109 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,117 p=31853 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] *** 2025-12-13 06:55:12,117 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.026) 0:02:08.267 ***** 2025-12-13 06:55:12,117 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.026) 0:02:08.265 ***** 2025-12-13 06:55:12,136 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,143 p=31853 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] ************** 2025-12-13 06:55:12,143 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.026) 0:02:08.293 ***** 2025-12-13 06:55:12,143 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.026) 0:02:08.291 ***** 2025-12-13 06:55:12,161 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,168 p=31853 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] 
*** 2025-12-13 06:55:12,168 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.025) 0:02:08.318 ***** 2025-12-13 06:55:12,168 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.025) 0:02:08.317 ***** 2025-12-13 06:55:12,185 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,193 p=31853 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] *** 2025-12-13 06:55:12,193 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.025) 0:02:08.343 ***** 2025-12-13 06:55:12,193 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.025) 0:02:08.342 ***** 2025-12-13 06:55:12,212 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,221 p=31853 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] *** 2025-12-13 06:55:12,221 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.027) 0:02:08.371 ***** 2025-12-13 06:55:12,221 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.027) 0:02:08.369 ***** 2025-12-13 06:55:12,239 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,246 p=31853 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] *** 2025-12-13 06:55:12,246 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.025) 0:02:08.396 ***** 2025-12-13 06:55:12,246 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.025) 0:02:08.395 ***** 2025-12-13 06:55:12,264 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,271 p=31853 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] *** 2025-12-13 06:55:12,271 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.024) 0:02:08.421 ***** 2025-12-13 06:55:12,271 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.024) 0:02:08.419 ***** 2025-12-13 06:55:12,288 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,296 p=31853 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': 
{'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] *** 2025-12-13 06:55:12,296 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.024) 0:02:08.446 ***** 2025-12-13 06:55:12,296 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.024) 0:02:08.444 ***** 2025-12-13 06:55:12,317 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:12,324 p=31853 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] *** 2025-12-13 06:55:12,324 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.028) 0:02:08.474 ***** 2025-12-13 06:55:12,324 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:12 +0000 (0:00:00.028) 0:02:08.473 ***** 2025-12-13 06:55:13,087 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:13,098 p=31853 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] *** 2025-12-13 06:55:13,098 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:13 +0000 (0:00:00.773) 0:02:09.248 ***** 2025-12-13 06:55:13,098 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:13 +0000 (0:00:00.773) 0:02:09.247 ***** 2025-12-13 06:55:13,880 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:13,889 p=31853 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] *** 2025-12-13 06:55:13,889 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:13 +0000 (0:00:00.791) 0:02:10.039 ***** 2025-12-13 06:55:13,890 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:13 +0000 (0:00:00.791) 0:02:10.038 ***** 2025-12-13 06:55:14,515 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:14,523 p=31853 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] *** 2025-12-13 06:55:14,523 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.633) 0:02:10.672 ***** 2025-12-13 06:55:14,523 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.633) 0:02:10.671 ***** 2025-12-13 06:55:14,537 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:14,544 p=31853 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] *** 2025-12-13 06:55:14,544 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.021) 0:02:10.694 ***** 
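For reference, the flattened "Patch network operator" arguments recorded above map onto a task of roughly the following shape. This is a sketch reconstructed only from the parameters printed in this log; the kubernetes.core.k8s_json_patch module name is an assumption and may not match the actual openshift_setup role source.

- name: Patch network operator
  kubernetes.core.k8s_json_patch:  # module name assumed; only the arguments below appear in the log
    kubeconfig: "{{ cifmw_openshift_kubeconfig }}"
    api_version: operator.openshift.io/v1
    kind: Network
    name: cluster
    persist_config: true
    patch:
      - op: replace
        path: /spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost
        value: true
      - op: replace
        path: /spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding
        value: Global

The two JSON-patch operations set routingViaHost to true and ipForwarding to Global in the OVN-Kubernetes gateway configuration, matching the values the log shows being applied before the "changed: [localhost]" result.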
2025-12-13 06:55:14,545 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.021) 0:02:10.693 ***** 2025-12-13 06:55:14,558 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:14,571 p=31853 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] *********************** 2025-12-13 06:55:14,571 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.026) 0:02:10.720 ***** 2025-12-13 06:55:14,571 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.026) 0:02:10.719 ***** 2025-12-13 06:55:14,588 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:14,595 p=31853 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] ************************************** 2025-12-13 06:55:14,595 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.024) 0:02:10.745 ***** 2025-12-13 06:55:14,595 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.024) 0:02:10.743 ***** 2025-12-13 06:55:14,612 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:14,619 p=31853 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] ********************* 2025-12-13 06:55:14,619 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.024) 0:02:10.769 ***** 2025-12-13 06:55:14,619 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.024) 0:02:10.768 ***** 2025-12-13 06:55:14,697 p=31853 u=zuul n=ansible | TASK [cert_manager : Create role needed directories path={{ cifmw_cert_manager_manifests_dir }}, state=directory, mode=0755] *** 2025-12-13 06:55:14,697 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.077) 0:02:10.847 ***** 2025-12-13 06:55:14,697 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.077) 0:02:10.846 ***** 2025-12-13 06:55:14,863 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:14,870 p=31853 u=zuul n=ansible | TASK [cert_manager : Create the cifmw_cert_manager_operator_namespace namespace" kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ cifmw_cert_manager_operator_namespace }}, kind=Namespace, state=present] *** 2025-12-13 06:55:14,870 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.172) 0:02:11.019 ***** 2025-12-13 06:55:14,870 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:14 +0000 (0:00:00.172) 0:02:11.018 ***** 2025-12-13 06:55:15,482 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:15,491 p=31853 u=zuul n=ansible | TASK [cert_manager : Install from Release Manifest _raw_params=release_manifest.yml] *** 2025-12-13 06:55:15,491 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:15 +0000 (0:00:00.620) 0:02:11.640 ***** 2025-12-13 06:55:15,491 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:15 +0000 (0:00:00.620) 0:02:11.639 ***** 2025-12-13 06:55:15,513 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/cert_manager/tasks/release_manifest.yml for localhost 2025-12-13 06:55:15,524 p=31853 u=zuul n=ansible | TASK [cert_manager : Download release manifests url={{ cifmw_cert_manager_release_manifest }}, dest={{ cifmw_cert_manager_manifests_dir }}/cert_manager_manifest.yml, mode=0664] *** 2025-12-13 06:55:15,524 p=31853 u=zuul n=ansible | Saturday 13 December 2025 
06:55:15 +0000 (0:00:00.033) 0:02:11.673 ***** 2025-12-13 06:55:15,524 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:15 +0000 (0:00:00.033) 0:02:11.672 ***** 2025-12-13 06:55:16,081 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:16,089 p=31853 u=zuul n=ansible | TASK [cert_manager : Install cert-manager from release manifest kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, state=present, src={{ cifmw_cert_manager_manifests_dir }}/cert_manager_manifest.yml] *** 2025-12-13 06:55:16,089 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:16 +0000 (0:00:00.565) 0:02:12.239 ***** 2025-12-13 06:55:16,089 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:16 +0000 (0:00:00.565) 0:02:12.237 ***** 2025-12-13 06:55:18,272 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:18,299 p=31853 u=zuul n=ansible | TASK [cert_manager : Install from OLM Manifest _raw_params=olm_manifest.yml] *** 2025-12-13 06:55:18,299 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:18 +0000 (0:00:02.209) 0:02:14.448 ***** 2025-12-13 06:55:18,299 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:18 +0000 (0:00:02.209) 0:02:14.447 ***** 2025-12-13 06:55:18,313 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:18,323 p=31853 u=zuul n=ansible | TASK [cert_manager : Check for cert-manager namespace existence kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name=cert-manager, kind=Namespace, field_selectors=['status.phase=Active']] *** 2025-12-13 06:55:18,323 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:18 +0000 (0:00:00.023) 0:02:14.472 ***** 2025-12-13 06:55:18,323 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:18 +0000 (0:00:00.023) 0:02:14.471 ***** 2025-12-13 06:55:18,933 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:18,940 p=31853 u=zuul n=ansible | TASK [cert_manager : Wait for cert-manager pods to be ready kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, namespace=cert-manager, kind=Pod, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Ready', 'status': 'True'}, label_selectors=['app = {{ item }}']] *** 2025-12-13 06:55:18,940 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:18 +0000 (0:00:00.617) 0:02:15.090 ***** 2025-12-13 06:55:18,940 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:18 +0000 (0:00:00.617) 0:02:15.089 ***** 2025-12-13 06:55:29,578 p=31853 u=zuul n=ansible | ok: [localhost] => (item=cainjector) 2025-12-13 06:55:30,167 p=31853 u=zuul n=ansible | ok: [localhost] => (item=webhook) 2025-12-13 06:55:30,761 p=31853 u=zuul n=ansible | ok: [localhost] => (item=cert-manager) 2025-12-13 06:55:30,775 p=31853 u=zuul n=ansible | TASK [cert_manager : Create $HOME/bin dir path={{ ansible_user_dir }}/bin, state=directory, mode=0755] *** 2025-12-13 06:55:30,776 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:30 +0000 (0:00:11.835) 0:02:26.925 ***** 2025-12-13 06:55:30,776 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:30 +0000 (0:00:11.835) 0:02:26.924 ***** 2025-12-13 06:55:30,936 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:30,945 p=31853 u=zuul n=ansible | TASK [cert_manager : Install cert-manager cmctl 
CLI url=https://github.com/cert-manager/cmctl/releases/{{ cifmw_cert_manager_version }}/download/cmctl_{{ _os }}_{{ _arch }}, dest={{ ansible_user_dir }}/bin/cmctl, mode=0755] *** 2025-12-13 06:55:30,945 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:30 +0000 (0:00:00.169) 0:02:27.095 ***** 2025-12-13 06:55:30,945 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:30 +0000 (0:00:00.169) 0:02:27.093 ***** 2025-12-13 06:55:32,220 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:32,229 p=31853 u=zuul n=ansible | TASK [cert_manager : Verify cert_manager api _raw_params={{ ansible_user_dir }}/bin/cmctl check api --wait=2m] *** 2025-12-13 06:55:32,229 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:01.283) 0:02:28.378 ***** 2025-12-13 06:55:32,229 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:01.283) 0:02:28.377 ***** 2025-12-13 06:55:32,483 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:32,496 p=31853 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] **************** 2025-12-13 06:55:32,496 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.266) 0:02:28.645 ***** 2025-12-13 06:55:32,496 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.266) 0:02:28.644 ***** 2025-12-13 06:55:32,515 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:32,521 p=31853 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ******************************** 2025-12-13 06:55:32,522 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.025) 0:02:28.671 ***** 2025-12-13 06:55:32,522 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.025) 0:02:28.670 ***** 2025-12-13 06:55:32,537 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:32,544 p=31853 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] ******************* 2025-12-13 06:55:32,544 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.022) 0:02:28.693 ***** 2025-12-13 06:55:32,544 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.022) 0:02:28.692 ***** 2025-12-13 06:55:32,559 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:32,567 p=31853 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************ 2025-12-13 06:55:32,567 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.022) 0:02:28.716 ***** 2025-12-13 06:55:32,567 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.022) 0:02:28.715 ***** 2025-12-13 06:55:32,582 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:32,589 p=31853 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************ 2025-12-13 06:55:32,589 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.022) 0:02:28.739 ***** 2025-12-13 06:55:32,589 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.022) 0:02:28.737 ***** 2025-12-13 06:55:32,608 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:32,616 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) 
is iterable']] *** 2025-12-13 06:55:32,616 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.027) 0:02:28.766 ***** 2025-12-13 06:55:32,616 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.027) 0:02:28.764 ***** 2025-12-13 06:55:32,664 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:32,672 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 06:55:32,672 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.055) 0:02:28.821 ***** 2025-12-13 06:55:32,672 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.055) 0:02:28.820 ***** 2025-12-13 06:55:32,741 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:32,749 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:32,749 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.077) 0:02:28.899 ***** 2025-12-13 06:55:32,749 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.077) 0:02:28.897 ***** 2025-12-13 06:55:32,837 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Fetch nodes facts and save them as parameters', 'type': 'playbook', 'inventory': '/home/zuul/ci-framework-data/artifacts/zuul_inventory.yml', 'source': 'fetch_compute_facts.yml'}) 2025-12-13 06:55:32,848 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for Fetch nodes facts and save them as parameters cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 06:55:32,848 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.099) 0:02:28.998 ***** 2025-12-13 06:55:32,848 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.099) 0:02:28.997 ***** 2025-12-13 06:55:32,887 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:32,894 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 06:55:32,894 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.045) 0:02:29.044 ***** 2025-12-13 06:55:32,894 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:32 +0000 (0:00:00.045) 0:02:29.042 ***** 2025-12-13 06:55:33,058 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:33,065 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
*** 2025-12-13 06:55:33,066 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.171) 0:02:29.215 ***** 2025-12-13 06:55:33,066 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.171) 0:02:29.214 ***** 2025-12-13 06:55:33,080 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:33,088 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 06:55:33,088 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.022) 0:02:29.238 ***** 2025-12-13 06:55:33,088 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.022) 0:02:29.236 ***** 2025-12-13 06:55:33,241 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:33,249 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 06:55:33,249 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.160) 0:02:29.399 ***** 2025-12-13 06:55:33,249 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.161) 0:02:29.398 ***** 2025-12-13 06:55:33,265 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:33,273 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 06:55:33,273 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.023) 0:02:29.422 ***** 2025-12-13 06:55:33,273 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.023) 0:02:29.421 ***** 2025-12-13 06:55:33,428 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:33,435 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 06:55:33,435 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.162) 0:02:29.585 ***** 2025-12-13 06:55:33,435 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.162) 0:02:29.583 ***** 2025-12-13 06:55:33,592 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:33,600 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Fetch nodes facts and save them as parameters] *** 2025-12-13 06:55:33,600 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.165) 0:02:29.750 ***** 2025-12-13 06:55:33,600 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:33 +0000 (0:00:00.165) 0:02:29.749 ***** 2025-12-13 06:55:33,646 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_002_run_hook_without_retry_fetch.log 2025-12-13 06:55:41,399 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:41,406 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Fetch nodes facts and save them as parameters] *** 2025-12-13 06:55:41,406 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:07.805) 0:02:37.556 ***** 2025-12-13 06:55:41,406 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:07.805) 0:02:37.555 ***** 2025-12-13 06:55:41,420 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:41,427 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir 
}}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 06:55:41,428 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.021) 0:02:37.577 ***** 2025-12-13 06:55:41,428 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.021) 0:02:37.576 ***** 2025-12-13 06:55:41,601 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:41,608 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 06:55:41,608 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.180) 0:02:37.758 ***** 2025-12-13 06:55:41,608 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.180) 0:02:37.757 ***** 2025-12-13 06:55:41,628 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:41,645 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:55:41,646 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.037) 0:02:37.795 ***** 2025-12-13 06:55:41,646 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.037) 0:02:37.794 ***** 2025-12-13 06:55:41,692 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:41,699 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 06:55:41,699 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.053) 0:02:37.849 ***** 2025-12-13 06:55:41,699 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.053) 0:02:37.847 ***** 2025-12-13 06:55:41,767 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:41,775 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_package_build _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:41,775 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.075) 0:02:37.924 ***** 2025-12-13 06:55:41,775 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.075) 0:02:37.923 ***** 2025-12-13 06:55:41,843 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:41,856 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 06:55:41,856 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.081) 0:02:38.006 ***** 2025-12-13 06:55:41,856 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.081) 0:02:38.004 ***** 2025-12-13 06:55:41,931 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:41,940 p=31853 u=zuul n=ansible | TASK [pkg_build : Generate volume list build_volumes={% for pkg in cifmw_pkg_build_list -%} - "{{ pkg.src|default(cifmw_pkg_build_pkg_basedir ~ '/' ~ pkg.name) }}:/root/src/{{ pkg.name }}:z" - "{{ cifmw_pkg_build_basedir }}/volumes/packages/{{ pkg.name }}:/root/{{ pkg.name }}:z" - "{{ cifmw_pkg_build_basedir }}/logs/build_{{ pkg.name }}:/root/logs:z" {% endfor -%} - "{{ cifmw_pkg_build_basedir }}/volumes/packages/gating_repo:/root/gating_repo:z" - "{{ cifmw_pkg_build_basedir }}/artifacts/repositories:/root/yum.repos.d:z,ro" - "{{ 
cifmw_pkg_build_basedir }}/artifacts/build-packages.yml:/root/playbook.yml:z,ro" ] *** 2025-12-13 06:55:41,940 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.084) 0:02:38.090 ***** 2025-12-13 06:55:41,941 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.084) 0:02:38.089 ***** 2025-12-13 06:55:41,961 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:41,970 p=31853 u=zuul n=ansible | TASK [pkg_build : Build package using container name={{ pkg.name }}-builder, auto_remove=True, detach=False, privileged=True, log_driver=k8s-file, log_level=info, log_opt={'path': '{{ cifmw_pkg_build_basedir }}/logs/{{ pkg.name }}-builder.log'}, image={{ cifmw_pkg_build_ctx_name }}, volume={{ build_volumes | from_yaml }}, security_opt=['label=disable', 'seccomp=unconfined', 'apparmor=unconfined'], env={'PROJECT': '{{ pkg.name }}'}, command=ansible-playbook -i localhost, -c local playbook.yml] *** 2025-12-13 06:55:41,970 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.029) 0:02:38.119 ***** 2025-12-13 06:55:41,970 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.029) 0:02:38.118 ***** 2025-12-13 06:55:41,982 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:41,994 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:55:41,994 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.024) 0:02:38.144 ***** 2025-12-13 06:55:41,994 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:41 +0000 (0:00:00.024) 0:02:38.142 ***** 2025-12-13 06:55:42,040 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,047 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-12-13 06:55:42,048 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.054) 0:02:38.198 ***** 2025-12-13 06:55:42,048 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.054) 0:02:38.197 ***** 2025-12-13 06:55:42,149 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,157 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_package_build _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:42,157 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.108) 0:02:38.307 ***** 2025-12-13 06:55:42,157 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.108) 0:02:38.306 ***** 2025-12-13 06:55:42,256 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:42,274 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:55:42,274 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.116) 0:02:38.424 ***** 2025-12-13 06:55:42,274 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.116) 0:02:38.422 ***** 2025-12-13 06:55:42,319 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,327 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 06:55:42,327 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.052) 0:02:38.476 ***** 2025-12-13 06:55:42,327 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.052) 0:02:38.475 ***** 2025-12-13 06:55:42,428 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,436 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_container_build _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:42,436 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.109) 0:02:38.586 ***** 2025-12-13 06:55:42,436 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.109) 0:02:38.584 ***** 2025-12-13 06:55:42,535 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:42,548 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 06:55:42,548 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.112) 0:02:38.698 ***** 2025-12-13 06:55:42,548 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.112) 0:02:38.697 ***** 2025-12-13 06:55:42,618 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,626 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Nothing to do yet msg=No support for that step yet] ******** 2025-12-13 06:55:42,626 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.077) 0:02:38.776 ***** 2025-12-13 06:55:42,626 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.077) 0:02:38.775 ***** 2025-12-13 06:55:42,641 p=31853 u=zuul n=ansible | ok: [localhost] => msg: No support for that step yet 2025-12-13 06:55:42,649 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is 
iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:55:42,649 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.022) 0:02:38.798 ***** 2025-12-13 06:55:42,649 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.022) 0:02:38.797 ***** 2025-12-13 06:55:42,695 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,703 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 06:55:42,703 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.054) 0:02:38.853 ***** 2025-12-13 06:55:42,703 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.054) 0:02:38.852 ***** 2025-12-13 06:55:42,772 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,781 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_container_build _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:42,781 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.077) 0:02:38.930 ***** 2025-12-13 06:55:42,781 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.077) 0:02:38.929 ***** 2025-12-13 06:55:42,848 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:42,866 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:55:42,867 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.085) 0:02:39.016 ***** 2025-12-13 06:55:42,867 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.085) 0:02:39.015 ***** 2025-12-13 06:55:42,914 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:42,923 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-12-13 06:55:42,923 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.056) 0:02:39.073 ***** 2025-12-13 06:55:42,923 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.056) 0:02:39.071 ***** 2025-12-13 06:55:42,992 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:43,000 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_operator_build _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:43,000 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.076) 0:02:39.150 ***** 2025-12-13 06:55:43,000 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:42 +0000 (0:00:00.076) 0:02:39.148 ***** 2025-12-13 06:55:43,067 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,080 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 06:55:43,080 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.080) 0:02:39.230 ***** 2025-12-13 06:55:43,080 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.080) 0:02:39.228 ***** 2025-12-13 06:55:43,123 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:43,132 p=31853 u=zuul n=ansible | TASK [operator_build : Ensure mandatory directories exist path={{ cifmw_operator_build_basedir }}/{{ item }}, state=directory, mode=0755] *** 2025-12-13 06:55:43,132 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.051) 0:02:39.281 ***** 2025-12-13 06:55:43,132 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.051) 0:02:39.280 ***** 2025-12-13 06:55:43,154 p=31853 u=zuul n=ansible | skipping: [localhost] => (item=artifacts) 2025-12-13 06:55:43,159 p=31853 u=zuul n=ansible | skipping: [localhost] => (item=logs) 2025-12-13 06:55:43,160 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,168 p=31853 u=zuul n=ansible | TASK [operator_build : Initialize role output cifmw_operator_build_output={{ cifmw_operator_build_output }}, cifmw_operator_build_meta_name={{ cifmw_operator_build_meta_name }}] *** 2025-12-13 06:55:43,168 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.036) 0:02:39.317 ***** 2025-12-13 06:55:43,168 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.036) 0:02:39.316 ***** 2025-12-13 06:55:43,188 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,195 p=31853 u=zuul n=ansible | TASK [operator_build : Populate operators list with zuul info _raw_params=zuul_info.yml] *** 2025-12-13 06:55:43,195 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.027) 0:02:39.345 ***** 2025-12-13 06:55:43,195 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.027) 0:02:39.344 ***** 2025-12-13 06:55:43,218 p=31853 u=zuul n=ansible | skipping: [localhost] => (item={'branch': 'main', 'change': '379', 'change_url': 'https://github.com/openstack-k8s-operators/test-operator/pull/379', 'commit_id': 'd19f803f400b92d4afd97dd749e753a7435bfaca', 'patchset': 'd19f803f400b92d4afd97dd749e753a7435bfaca', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/test-operator', 'name': 'openstack-k8s-operators/test-operator', 'short_name': 'test-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/test-operator'}, 'topic': None}) 2025-12-13 06:55:43,220 p=31853 u=zuul n=ansible | skipping: [localhost] 
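The "Assert parameters are valid" task that recurs throughout the run_hook steps above reduces to one assertion over the hook variables. A minimal sketch, assuming the standard ansible.builtin.assert module and reconstructed from the 'that' list printed in the log:

- name: Assert parameters are valid
  ansible.builtin.assert:
    quiet: true
    that:
      - _list_hooks is not string
      - _list_hooks is not mapping
      - _list_hooks is iterable
      - (hooks | default([])) is not string
      - (hooks | default([])) is not mapping
      - (hooks | default([])) is iterable

The string and mapping checks reject a bare string or a single dict where a list of hook mappings is expected, which is also what the companion "Assert single hooks are all mappings" task enforces with its _not_mapping_hooks | length == 0 condition.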
2025-12-13 06:55:43,227 p=31853 u=zuul n=ansible | TASK [operator_build : Merge lists of operators operators_list={{ [cifmw_operator_build_operators, zuul_info_operators | default([])] | community.general.lists_mergeby('name') }}] *** 2025-12-13 06:55:43,227 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.031) 0:02:39.376 ***** 2025-12-13 06:55:43,227 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.031) 0:02:39.375 ***** 2025-12-13 06:55:43,246 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,254 p=31853 u=zuul n=ansible | TASK [operator_build : Get meta_operator src dir from operators_list cifmw_operator_build_meta_src={{ (operators_list | selectattr('name', 'eq', cifmw_operator_build_meta_name) | map(attribute='src') | first ) | default(cifmw_operator_build_meta_src, true) }}] *** 2025-12-13 06:55:43,254 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.027) 0:02:39.404 ***** 2025-12-13 06:55:43,254 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.027) 0:02:39.402 ***** 2025-12-13 06:55:43,274 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,282 p=31853 u=zuul n=ansible | TASK [operator_build : Adds meta-operator to the list operators_list={{ [operators_list, meta_operator_info] | community.general.lists_mergeby('name') }}] *** 2025-12-13 06:55:43,282 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.432 ***** 2025-12-13 06:55:43,282 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.431 ***** 2025-12-13 06:55:43,304 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,311 p=31853 u=zuul n=ansible | TASK [operator_build : Clone operator's code when src dir is empty _raw_params=clone.yml] *** 2025-12-13 06:55:43,311 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.460 ***** 2025-12-13 06:55:43,311 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.459 ***** 2025-12-13 06:55:43,331 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,339 p=31853 u=zuul n=ansible | TASK [operator_build : Building operators _raw_params=build.yml] *************** 2025-12-13 06:55:43,339 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.488 ***** 2025-12-13 06:55:43,339 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.487 ***** 2025-12-13 06:55:43,360 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,367 p=31853 u=zuul n=ansible | TASK [operator_build : Building meta operator _raw_params=build.yml] *********** 2025-12-13 06:55:43,367 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.517 ***** 2025-12-13 06:55:43,367 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.028) 0:02:39.516 ***** 2025-12-13 06:55:43,388 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,395 p=31853 u=zuul n=ansible | TASK [operator_build : Gather role output dest={{ cifmw_operator_build_basedir }}/artifacts/custom-operators.yml, content={{ cifmw_operator_build_output | to_nice_yaml }}, mode=0644] *** 2025-12-13 06:55:43,395 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.027) 0:02:39.545 ***** 2025-12-13 06:55:43,395 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 
(0:00:00.027) 0:02:39.543 ***** 2025-12-13 06:55:43,415 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,427 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:55:43,428 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.032) 0:02:39.577 ***** 2025-12-13 06:55:43,428 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.032) 0:02:39.576 ***** 2025-12-13 06:55:43,474 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:43,481 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 06:55:43,481 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.053) 0:02:39.631 ***** 2025-12-13 06:55:43,481 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.053) 0:02:39.630 ***** 2025-12-13 06:55:43,552 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:43,562 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_operator_build _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:43,562 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.080) 0:02:39.711 ***** 2025-12-13 06:55:43,562 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.080) 0:02:39.710 ***** 2025-12-13 06:55:43,630 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:43,648 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 06:55:43,648 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.086) 0:02:39.798 ***** 2025-12-13 06:55:43,648 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.086) 0:02:39.797 ***** 2025-12-13 06:55:43,699 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:43,706 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-12-13 06:55:43,706 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.057) 0:02:39.856 ***** 2025-12-13 06:55:43,706 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.057) 0:02:39.854 ***** 2025-12-13 06:55:43,778 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:43,786 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_deploy _raw_params={{ hook.type }}.yml] *** 2025-12-13 06:55:43,786 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.080) 0:02:39.936 ***** 2025-12-13 06:55:43,787 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.080) 0:02:39.935 ***** 2025-12-13 06:55:43,883 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '61 HCI pre deploy kustomizations', 'source': 'control_plane_hci_pre_deploy.yml', 'type': 'playbook'}) 2025-12-13 06:55:43,891 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '80 Kustomize OpenStack CR', 'source': 'control_plane_horizon.yml', 'type': 'playbook'}) 2025-12-13 06:55:43,903 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for 61 HCI pre deploy kustomizations cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 06:55:43,903 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.116) 0:02:40.052 ***** 2025-12-13 06:55:43,903 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.116) 0:02:40.051 ***** 2025-12-13 06:55:43,943 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:43,951 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 06:55:43,952 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.048) 0:02:40.101 ***** 2025-12-13 06:55:43,952 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:43 +0000 (0:00:00.048) 0:02:40.100 ***** 2025-12-13 06:55:44,125 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:44,133 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
*** 2025-12-13 06:55:44,133 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.181) 0:02:40.283 ***** 2025-12-13 06:55:44,133 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.181) 0:02:40.281 ***** 2025-12-13 06:55:44,152 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:44,159 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 06:55:44,160 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.026) 0:02:40.309 ***** 2025-12-13 06:55:44,160 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.026) 0:02:40.308 ***** 2025-12-13 06:55:44,320 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:44,328 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 06:55:44,328 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.168) 0:02:40.478 ***** 2025-12-13 06:55:44,328 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.168) 0:02:40.477 ***** 2025-12-13 06:55:44,349 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:44,357 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 06:55:44,357 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.028) 0:02:40.507 ***** 2025-12-13 06:55:44,357 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.028) 0:02:40.505 ***** 2025-12-13 06:55:44,537 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:44,544 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 06:55:44,545 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.187) 0:02:40.694 ***** 2025-12-13 06:55:44,545 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.187) 0:02:40.693 ***** 2025-12-13 06:55:44,706 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:44,715 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 61 HCI pre deploy kustomizations] **** 2025-12-13 06:55:44,715 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.170) 0:02:40.864 ***** 2025-12-13 06:55:44,715 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:44 +0000 (0:00:00.170) 0:02:40.863 ***** 2025-12-13 06:55:44,764 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_003_run_hook_without_retry_61_hci.log 2025-12-13 06:55:46,234 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:46,243 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 61 HCI pre deploy kustomizations] ******* 2025-12-13 06:55:46,243 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:01.528) 0:02:42.392 ***** 2025-12-13 06:55:46,243 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:01.528) 0:02:42.391 ***** 2025-12-13 06:55:46,265 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:46,272 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ 
hook_name }}.yml] *** 2025-12-13 06:55:46,272 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.029) 0:02:42.422 ***** 2025-12-13 06:55:46,273 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.029) 0:02:42.421 ***** 2025-12-13 06:55:46,427 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:46,435 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 06:55:46,435 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.162) 0:02:42.584 ***** 2025-12-13 06:55:46,435 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.162) 0:02:42.583 ***** 2025-12-13 06:55:46,453 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:46,462 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for 80 Kustomize OpenStack CR cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 06:55:46,462 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.027) 0:02:42.612 ***** 2025-12-13 06:55:46,462 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.027) 0:02:42.611 ***** 2025-12-13 06:55:46,503 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:46,510 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 06:55:46,510 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.048) 0:02:42.660 ***** 2025-12-13 06:55:46,510 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.048) 0:02:42.659 ***** 2025-12-13 06:55:46,681 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:46,688 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
*** 2025-12-13 06:55:46,689 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.178) 0:02:42.838 ***** 2025-12-13 06:55:46,689 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.178) 0:02:42.837 ***** 2025-12-13 06:55:46,707 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:46,715 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 06:55:46,715 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.026) 0:02:42.865 ***** 2025-12-13 06:55:46,715 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.026) 0:02:42.863 ***** 2025-12-13 06:55:46,876 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:46,884 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 06:55:46,884 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.168) 0:02:43.033 ***** 2025-12-13 06:55:46,884 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.168) 0:02:43.032 ***** 2025-12-13 06:55:46,905 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:46,913 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 06:55:46,913 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.029) 0:02:43.062 ***** 2025-12-13 06:55:46,913 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:46 +0000 (0:00:00.029) 0:02:43.061 ***** 2025-12-13 06:55:47,076 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:47,084 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 06:55:47,084 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:47 +0000 (0:00:00.171) 0:02:43.233 ***** 2025-12-13 06:55:47,084 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:47 +0000 (0:00:00.171) 0:02:43.232 ***** 2025-12-13 06:55:47,249 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:47,258 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 80 Kustomize OpenStack CR] *********** 2025-12-13 06:55:47,258 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:47 +0000 (0:00:00.173) 0:02:43.407 ***** 2025-12-13 06:55:47,258 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:47 +0000 (0:00:00.173) 0:02:43.406 ***** 2025-12-13 06:55:47,307 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_004_run_hook_without_retry_80.log 2025-12-13 06:55:48,799 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:48,808 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 80 Kustomize OpenStack CR] ************** 2025-12-13 06:55:48,808 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:48 +0000 (0:00:01.550) 0:02:44.958 ***** 2025-12-13 06:55:48,808 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:48 +0000 (0:00:01.550) 0:02:44.956 ***** 2025-12-13 06:55:48,829 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:48,837 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name 
}}.yml] *** 2025-12-13 06:55:48,837 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:48 +0000 (0:00:00.028) 0:02:44.986 ***** 2025-12-13 06:55:48,837 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:48 +0000 (0:00:00.028) 0:02:44.985 ***** 2025-12-13 06:55:48,991 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:48,999 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 06:55:48,999 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:48 +0000 (0:00:00.162) 0:02:45.149 ***** 2025-12-13 06:55:48,999 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:48 +0000 (0:00:00.162) 0:02:45.147 ***** 2025-12-13 06:55:49,017 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:49,031 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 06:55:49,031 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.032) 0:02:45.181 ***** 2025-12-13 06:55:49,031 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.032) 0:02:45.179 ***** 2025-12-13 06:55:49,111 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:49,119 p=31853 u=zuul n=ansible | TASK [Configure Storage Class name=ci_local_storage] *************************** 2025-12-13 06:55:49,119 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.088) 0:02:45.269 ***** 2025-12-13 06:55:49,119 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.088) 0:02:45.268 ***** 2025-12-13 06:55:49,250 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Create role needed directories path={{ cifmw_cls_manifests_dir }}, state=directory, mode=0755] *** 2025-12-13 06:55:49,250 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.130) 0:02:45.400 ***** 2025-12-13 06:55:49,250 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.130) 0:02:45.399 ***** 2025-12-13 06:55:49,419 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:49,427 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Create the cifmw_cls_namespace namespace" kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ cifmw_cls_namespace }}, kind=Namespace, state=present] *** 2025-12-13 06:55:49,427 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.176) 0:02:45.577 ***** 2025-12-13 06:55:49,427 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:49 +0000 (0:00:00.176) 0:02:45.575 ***** 2025-12-13 06:55:50,037 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:50,045 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Save storage manifests as artifacts dest={{ cifmw_cls_manifests_dir }}/storage-class.yaml, content={{ cifmw_cls_storage_manifest | to_nice_yaml }}, mode=0644] *** 2025-12-13 06:55:50,045 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:50 +0000 (0:00:00.618) 0:02:46.195 ***** 2025-12-13 06:55:50,045 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:50 +0000 (0:00:00.618) 0:02:46.194 ***** 2025-12-13 06:55:50,395 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:50,403 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Get k8s nodes kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | 
default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Node] *** 2025-12-13 06:55:50,403 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:50 +0000 (0:00:00.357) 0:02:46.553 ***** 2025-12-13 06:55:50,403 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:50 +0000 (0:00:00.357) 0:02:46.552 ***** 2025-12-13 06:55:51,068 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:51,077 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Fetch hostnames for all hosts _raw_params=hostname] *** 2025-12-13 06:55:51,078 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:51 +0000 (0:00:00.674) 0:02:47.227 ***** 2025-12-13 06:55:51,078 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:51 +0000 (0:00:00.674) 0:02:47.226 ***** 2025-12-13 06:55:51,277 p=31853 u=zuul n=ansible | changed: [localhost -> compute-0(192.168.25.195)] => (item=compute-0) 2025-12-13 06:55:51,816 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=crc) 2025-12-13 06:55:52,116 p=31853 u=zuul n=ansible | changed: [localhost -> controller(192.168.25.167)] => (item=controller) 2025-12-13 06:55:52,266 p=31853 u=zuul n=ansible | changed: [localhost] => (item=localhost) 2025-12-13 06:55:52,267 p=31853 u=zuul n=ansible | [WARNING]: Platform linux on host localhost is using the discovered Python interpreter at /usr/bin/python3.9, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.15/reference_appendices/interpreter_discovery.html for more information. 2025-12-13 06:55:52,275 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Set the hosts k8s ansible hosts cifmw_ci_local_storage_k8s_hosts={{ _host_map | selectattr("key", "in", k8s_nodes_hostnames) | map(attribute="value") | list }}, cifmw_ci_local_storage_k8s_hostnames={{ k8s_nodes_hostnames }}] *** 2025-12-13 06:55:52,275 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:01.197) 0:02:48.425 ***** 2025-12-13 06:55:52,275 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:01.197) 0:02:48.424 ***** 2025-12-13 06:55:52,307 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:52,315 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Apply the storage class manifests kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit) }}, context={{ cifmw_openshift_context | default(omit) }}, state=present, src={{ cifmw_cls_manifests_dir }}/storage-class.yaml] *** 2025-12-13 06:55:52,315 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:00.039) 0:02:48.464 ***** 2025-12-13 06:55:52,315 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:00.039) 0:02:48.463 ***** 2025-12-13 06:55:52,940 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:52,948 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Create directories on worker node _raw_params=worker_node_dirs.yml] *** 2025-12-13 06:55:52,948 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:00.633) 0:02:49.097 ***** 2025-12-13 06:55:52,948 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:00.633) 0:02:49.096 ***** 2025-12-13 06:55:52,978 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_local_storage/tasks/worker_node_dirs.yml for localhost => (item=crc) 2025-12-13 06:55:52,988 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Perform
action in the PV directory path={{ [ cifmw_cls_local_storage_name, 'pv'+ ("%02d" | format(item | int)) ] | path_join }}, state={{ 'directory' if cifmw_cls_action == 'create' else 'absent' }}, mode=0775] *** 2025-12-13 06:55:52,988 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:00.040) 0:02:49.138 ***** 2025-12-13 06:55:52,988 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:52 +0000 (0:00:00.040) 0:02:49.137 ***** 2025-12-13 06:55:53,348 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=1) 2025-12-13 06:55:53,693 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=2) 2025-12-13 06:55:54,018 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=3) 2025-12-13 06:55:54,351 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=4) 2025-12-13 06:55:54,668 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=5) 2025-12-13 06:55:55,012 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=6) 2025-12-13 06:55:55,355 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=7) 2025-12-13 06:55:55,711 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=8) 2025-12-13 06:55:56,041 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=9) 2025-12-13 06:55:56,373 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=10) 2025-12-13 06:55:56,696 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=11) 2025-12-13 06:55:57,031 p=31853 u=zuul n=ansible | changed: [localhost -> crc(192.168.25.89)] => (item=12) 2025-12-13 06:55:57,043 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Generate pv related storage manifest file src=storage.yaml.j2, dest={{ cifmw_cls_manifests_dir }}/storage.yaml, mode=0644] *** 2025-12-13 06:55:57,043 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:57 +0000 (0:00:04.054) 0:02:53.192 ***** 2025-12-13 06:55:57,043 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:57 +0000 (0:00:04.054) 0:02:53.191 ***** 2025-12-13 06:55:57,382 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:57,390 p=31853 u=zuul n=ansible | TASK [ci_local_storage : Apply pv related storage manifest file kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit) }}, context={{ cifmw_openshift_context | default(omit) }}, state=present, src={{ cifmw_cls_manifests_dir }}/storage.yaml] *** 2025-12-13 06:55:57,390 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:57 +0000 (0:00:00.347) 0:02:53.540 ***** 2025-12-13 06:55:57,390 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:57 +0000 (0:00:00.347) 0:02:53.538 ***** 2025-12-13 06:55:58,093 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:58,109 p=31853 u=zuul n=ansible | TASK [Configure LVMS Storage Class name=ci_lvms_storage] *********************** 2025-12-13 06:55:58,109 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.719) 0:02:54.259 ***** 2025-12-13 06:55:58,109 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.719) 0:02:54.257 ***** 2025-12-13 06:55:58,135 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:58,143 p=31853 u=zuul n=ansible | TASK [Run edpm_prepare name=edpm_prepare] ************************************** 2025-12-13 06:55:58,143 p=31853 u=zuul n=ansible | Saturday 
13 December 2025 06:55:58 +0000 (0:00:00.034) 0:02:54.293 ***** 2025-12-13 06:55:58,144 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.034) 0:02:54.292 ***** 2025-12-13 06:55:58,257 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Define minimal set of repo variables when not running on Zuul _install_yamls_repos={'OPENSTACK_BRANCH': '', "GIT_CLONE_OPTS'": '-l', "OPENSTACK_REPO'": '{{ operators_build_output[cifmw_operator_build_meta_name].git_src_dir }}'}] *** 2025-12-13 06:55:58,258 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.114) 0:02:54.407 ***** 2025-12-13 06:55:58,258 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.114) 0:02:54.406 ***** 2025-12-13 06:55:58,288 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:58,297 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Set install_yamls Makefile environment variables cifmw_edpm_prepare_common_env={{ cifmw_install_yamls_environment | combine({'PATH': cifmw_path}) | combine(_install_yamls_repos | default({})) | combine(cifmw_edpm_prepare_extra_vars | default({})) }}, cifmw_edpm_prepare_make_openstack_env={% if cifmw_operator_build_meta_name is defined and cifmw_operator_build_meta_name in operators_build_output %} OPENSTACK_IMG: {{ operators_build_output[cifmw_operator_build_meta_name].image_catalog }} {% endif %} , cifmw_edpm_prepare_make_openstack_deploy_prep_env=CLEANUP_DIR_CMD: "true" , cifmw_edpm_prepare_operators_build_output={{ operators_build_output }}] *** 2025-12-13 06:55:58,297 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.039) 0:02:54.447 ***** 2025-12-13 06:55:58,298 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.039) 0:02:54.446 ***** 2025-12-13 06:55:58,329 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:55:58,338 p=31853 u=zuul n=ansible | TASK [Prepare storage in CRC name=install_yamls_makes, tasks_from=make_crc_storage] *** 2025-12-13 06:55:58,338 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.040) 0:02:54.488 ***** 2025-12-13 06:55:58,338 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.040) 0:02:54.486 ***** 2025-12-13 06:55:58,361 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:58,370 p=31853 u=zuul n=ansible | TASK [Prepare inputs name=install_yamls_makes, tasks_from=make_input] ********** 2025-12-13 06:55:58,370 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.032) 0:02:54.520 ***** 2025-12-13 06:55:58,370 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.032) 0:02:54.519 ***** 2025-12-13 06:55:58,416 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_input_env var=make_input_env] *********** 2025-12-13 06:55:58,417 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.046) 0:02:54.566 ***** 2025-12-13 06:55:58,417 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.046) 0:02:54.565 ***** 2025-12-13 06:55:58,447 p=31853 u=zuul n=ansible | ok: [localhost] => make_input_env: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig NETWORK_MTU: 1440 NNCP_DNS_SERVER: 192.168.122.10 NNCP_INTERFACE: enp7s0 OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm PATH: 
/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin TEST_BRANCH: '' TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator 2025-12-13 06:55:58,457 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_input_params var=make_input_params] ***** 2025-12-13 06:55:58,457 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.040) 0:02:54.607 ***** 2025-12-13 06:55:58,457 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.040) 0:02:54.606 ***** 2025-12-13 06:55:58,481 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:58,490 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Run input output_dir={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls, script=make input, dry_run={{ make_input_dryrun|default(false)|bool }}, extra_args={{ dict((make_input_env|default({})), **(make_input_params|default({}))) }}] *** 2025-12-13 06:55:58,490 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.032) 0:02:54.639 ***** 2025-12-13 06:55:58,490 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:58 +0000 (0:00:00.032) 0:02:54.638 ***** 2025-12-13 06:55:58,539 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_005_run.log 2025-12-13 06:55:59,621 p=31853 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ make_input_until | default(true) }} 2025-12-13 06:55:59,623 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:55:59,635 p=31853 u=zuul n=ansible | TASK [OpenStack meta-operator installation name=install_yamls_makes, tasks_from=make_openstack] *** 2025-12-13 06:55:59,635 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:59 +0000 (0:00:01.145) 0:02:55.785 ***** 2025-12-13 06:55:59,635 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:59 +0000 (0:00:01.145) 0:02:55.784 ***** 2025-12-13 06:55:59,679 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_env var=make_openstack_env] *** 2025-12-13 06:55:59,680 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:59 +0000 (0:00:00.044) 0:02:55.829 ***** 2025-12-13 06:55:59,680 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:59 +0000 (0:00:00.044) 0:02:55.828 ***** 2025-12-13 06:55:59,705 p=31853 u=zuul n=ansible | ok: [localhost] => make_openstack_env: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig NETWORK_MTU: 1440 NNCP_DNS_SERVER: 192.168.122.10 NNCP_INTERFACE: enp7s0 OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm PATH: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin TEST_BRANCH: '' TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator 2025-12-13 06:55:59,712 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_params var=make_openstack_params] *** 2025-12-13 06:55:59,712 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:59 +0000 (0:00:00.032) 0:02:55.862 ***** 2025-12-13 06:55:59,712 p=31853 u=zuul n=ansible | Saturday 13 December 
2025 06:55:59 +0000 (0:00:00.032) 0:02:55.860 ***** 2025-12-13 06:55:59,730 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:55:59,738 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Run openstack output_dir={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls, script=make openstack, dry_run={{ make_openstack_dryrun|default(false)|bool }}, extra_args={{ dict((make_openstack_env|default({})), **(make_openstack_params|default({}))) }}] *** 2025-12-13 06:55:59,738 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:59 +0000 (0:00:00.026) 0:02:55.888 ***** 2025-12-13 06:55:59,738 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:55:59 +0000 (0:00:00.026) 0:02:55.887 ***** 2025-12-13 06:55:59,783 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_006_run.log 2025-12-13 06:57:54,576 p=31853 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ make_openstack_until | default(true) }} 2025-12-13 06:57:54,578 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:57:54,590 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Wait for OpenStack subscription creation _raw_params=oc get sub openstack-operator --namespace={{ cifmw_install_yamls_defaults['OPERATOR_NAMESPACE'] }} -o=jsonpath='{.status.installplan.name}'] *** 2025-12-13 06:57:54,590 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:57:54 +0000 (0:01:54.851) 0:04:50.740 ***** 2025-12-13 06:57:54,590 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:57:54 +0000 (0:01:54.851) 0:04:50.738 ***** 2025-12-13 06:58:55,404 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:58:55,413 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Wait for OpenStack operator to get installed _raw_params=oc wait InstallPlan {{ cifmw_edpm_prepare_wait_installplan_out.stdout }} --namespace={{ cifmw_install_yamls_defaults['OPERATOR_NAMESPACE'] }} --for=jsonpath='{.status.phase}'=Complete --timeout=20m] *** 2025-12-13 06:58:55,413 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:55 +0000 (0:01:00.823) 0:05:51.563 ***** 2025-12-13 06:58:55,414 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:55 +0000 (0:01:00.823) 0:05:51.562 ***** 2025-12-13 06:58:55,829 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:58:55,838 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Check if the OpenStack initialization CRD exists kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit) }}, context={{ cifmw_openshift_context | default(omit) }}, kind=CustomResourceDefinition, name=openstacks.operator.openstack.org] *** 2025-12-13 06:58:55,838 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:55 +0000 (0:00:00.424) 0:05:51.987 ***** 2025-12-13 06:58:55,838 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:55 +0000 (0:00:00.424) 0:05:51.986 ***** 2025-12-13 06:58:56,691 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 06:58:56,699 p=31853 u=zuul n=ansible | TASK [OpenStack meta-operator initialization, if necessary name=install_yamls_makes, tasks_from=make_openstack_init] *** 2025-12-13 06:58:56,700 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.861) 0:05:52.849 ***** 2025-12-13 06:58:56,700 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.861) 0:05:52.848 ***** 
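The repeated "[WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}" messages around the install_yamls_makes steps above and below come from retry conditions written as "{{ make_openstack_until | default(true) }}": Ansible already evaluates "until" (like "when" and "failed_when") as a Jinja2 expression, so the extra braces only trigger the warning and do not change behaviour. Below is a minimal sketch of the warned form next to the bare form, using a plain ansible.builtin.command task purely for illustration; the framework actually drives these make targets through its own script wrapper, and "_make_result" is an invented register name, while "make_openstack_until" and the chdir path are taken from the log.

    # Warned form: the retry condition is wrapped in {{ }} even though
    # `until` is already evaluated as a Jinja2 expression.
    - name: Run a make target with a templated retry condition (warned form)
      ansible.builtin.command:
        cmd: make openstack
        chdir: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls
      register: _make_result
      retries: 3
      delay: 10
      until: "{{ make_openstack_until | default(true) }}"

    # Equivalent form without the warning: pass the bare expression.
    - name: Run a make target with a bare retry condition (clean form)
      ansible.builtin.command:
        cmd: make openstack
        chdir: /home/zuul/src/github.com/openstack-k8s-operators/install_yamls
      register: _make_result
      retries: 3
      delay: 10
      until: make_openstack_until | default(true)

The same pattern accounts for the earlier warning mentioning make_input_until and the later ones mentioning make_openstack_init_until, make_openstack_deploy_prep_until and make_netconfig_deploy_until.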
2025-12-13 06:58:56,756 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_init_env var=make_openstack_init_env] *** 2025-12-13 06:58:56,756 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.056) 0:05:52.906 ***** 2025-12-13 06:58:56,756 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.056) 0:05:52.904 ***** 2025-12-13 06:58:56,785 p=31853 u=zuul n=ansible | ok: [localhost] => make_openstack_init_env: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig NETWORK_MTU: 1440 NNCP_DNS_SERVER: 192.168.122.10 NNCP_INTERFACE: enp7s0 OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm PATH: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin TEST_BRANCH: '' TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator 2025-12-13 06:58:56,792 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_init_params var=make_openstack_init_params] *** 2025-12-13 06:58:56,792 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.036) 0:05:52.942 ***** 2025-12-13 06:58:56,792 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.036) 0:05:52.941 ***** 2025-12-13 06:58:56,815 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:58:56,823 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Run openstack_init output_dir={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls, script=make openstack_init, dry_run={{ make_openstack_init_dryrun|default(false)|bool }}, extra_args={{ dict((make_openstack_init_env|default({})), **(make_openstack_init_params|default({}))) }}] *** 2025-12-13 06:58:56,823 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.031) 0:05:52.973 ***** 2025-12-13 06:58:56,824 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:58:56 +0000 (0:00:00.031) 0:05:52.972 ***** 2025-12-13 06:58:56,876 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_007_run_openstack.log 2025-12-13 06:59:59,674 p=31853 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. 
Found: {{ make_openstack_init_until | default(true) }} 2025-12-13 06:59:59,675 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 06:59:59,687 p=31853 u=zuul n=ansible | TASK [Update OpenStack Services containers Env name=set_openstack_containers] *** 2025-12-13 06:59:59,687 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:01:02.863) 0:06:55.836 ***** 2025-12-13 06:59:59,687 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:01:02.863) 0:06:55.835 ***** 2025-12-13 06:59:59,705 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:59:59,713 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Set facts for baremetal UEFI image url cifmw_update_containers_edpm_image_url={{ cifmw_build_images_output['images']['edpm-hardened-uefi']['image'] }}, cacheable=True] *** 2025-12-13 06:59:59,713 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.026) 0:06:55.863 ***** 2025-12-13 06:59:59,713 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.026) 0:06:55.862 ***** 2025-12-13 06:59:59,730 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:59:59,738 p=31853 u=zuul n=ansible | TASK [Prepare OpenStack control plane CR name=install_yamls_makes, tasks_from=make_openstack_deploy_prep] *** 2025-12-13 06:59:59,738 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.024) 0:06:55.888 ***** 2025-12-13 06:59:59,738 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.024) 0:06:55.887 ***** 2025-12-13 06:59:59,782 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_deploy_prep_env var=make_openstack_deploy_prep_env] *** 2025-12-13 06:59:59,782 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.043) 0:06:55.932 ***** 2025-12-13 06:59:59,782 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.043) 0:06:55.931 ***** 2025-12-13 06:59:59,807 p=31853 u=zuul n=ansible | ok: [localhost] => make_openstack_deploy_prep_env: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' CLEANUP_DIR_CMD: 'true' INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig NETWORK_MTU: 1440 NNCP_DNS_SERVER: 192.168.122.10 NNCP_INTERFACE: enp7s0 OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm PATH: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin TEST_BRANCH: '' TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator 2025-12-13 06:59:59,814 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_deploy_prep_params var=make_openstack_deploy_prep_params] *** 2025-12-13 06:59:59,814 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.031) 0:06:55.964 ***** 2025-12-13 06:59:59,814 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.031) 0:06:55.962 ***** 2025-12-13 06:59:59,833 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 06:59:59,841 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Run openstack_deploy_prep output_dir={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls, script=make openstack_deploy_prep, dry_run={{ make_openstack_deploy_prep_dryrun|default(false)|bool }}, extra_args={{ 
dict((make_openstack_deploy_prep_env|default({})), **(make_openstack_deploy_prep_params|default({}))) }}] *** 2025-12-13 06:59:59,841 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.027) 0:06:55.991 ***** 2025-12-13 06:59:59,842 p=31853 u=zuul n=ansible | Saturday 13 December 2025 06:59:59 +0000 (0:00:00.027) 0:06:55.990 ***** 2025-12-13 06:59:59,885 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_008_run_openstack_deploy.log 2025-12-13 07:00:00,899 p=31853 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ make_openstack_deploy_prep_until | default(true) }} 2025-12-13 07:00:00,901 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:00:00,913 p=31853 u=zuul n=ansible | TASK [Deploy NetConfig name=install_yamls_makes, tasks_from=make_netconfig_deploy] *** 2025-12-13 07:00:00,913 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:00 +0000 (0:00:01.071) 0:06:57.063 ***** 2025-12-13 07:00:00,913 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:00 +0000 (0:00:01.071) 0:06:57.061 ***** 2025-12-13 07:00:00,964 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_netconfig_deploy_env var=make_netconfig_deploy_env] *** 2025-12-13 07:00:00,964 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:00 +0000 (0:00:00.051) 0:06:57.114 ***** 2025-12-13 07:00:00,965 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:00 +0000 (0:00:00.051) 0:06:57.113 ***** 2025-12-13 07:00:00,989 p=31853 u=zuul n=ansible | ok: [localhost] => make_netconfig_deploy_env: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig NETWORK_MTU: 1440 NNCP_DNS_SERVER: 192.168.122.10 NNCP_INTERFACE: enp7s0 OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm PATH: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin TEST_BRANCH: '' TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator 2025-12-13 07:00:00,997 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_netconfig_deploy_params var=make_netconfig_deploy_params] *** 2025-12-13 07:00:00,997 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:00 +0000 (0:00:00.032) 0:06:57.147 ***** 2025-12-13 07:00:00,997 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:00 +0000 (0:00:00.032) 0:06:57.145 ***** 2025-12-13 07:00:01,016 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:00:01,025 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Run netconfig_deploy output_dir={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls, script=make netconfig_deploy, dry_run={{ make_netconfig_deploy_dryrun|default(false)|bool }}, extra_args={{ dict((make_netconfig_deploy_env|default({})), **(make_netconfig_deploy_params|default({}))) }}] *** 2025-12-13 07:00:01,025 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:01 +0000 (0:00:00.027) 0:06:57.174 ***** 2025-12-13 07:00:01,025 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:01 +0000 (0:00:00.027) 0:06:57.173 ***** 2025-12-13 07:00:01,069 p=31853 u=zuul n=ansible | Follow script's output here: 
/home/zuul/ci-framework-data/logs/ci_script_009_run_netconfig.log 2025-12-13 07:00:04,538 p=31853 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ make_netconfig_deploy_until | default(true) }} 2025-12-13 07:00:04,540 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:00:04,553 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Kustomize and deploy OpenStackControlPlane _raw_params=kustomize_and_deploy.yml] *** 2025-12-13 07:00:04,553 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:03.528) 0:07:00.702 ***** 2025-12-13 07:00:04,553 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:03.528) 0:07:00.701 ***** 2025-12-13 07:00:04,583 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/edpm_prepare/tasks/kustomize_and_deploy.yml for localhost 2025-12-13 07:00:04,600 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Controlplane name _ctlplane_name=controlplane] ************ 2025-12-13 07:00:04,600 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.047) 0:07:00.750 ***** 2025-12-13 07:00:04,600 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.047) 0:07:00.749 ***** 2025-12-13 07:00:04,620 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:00:04,628 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Set vars related to update_containers content provider cifmw_update_containers_registry={{ content_provider_os_registry_url | split('/') | first }}, cifmw_update_containers_org={{ content_provider_os_registry_url | split('/') | last }}, cifmw_update_containers_tag={{ content_provider_dlrn_md5_hash }}, cifmw_update_containers_openstack=True] *** 2025-12-13 07:00:04,628 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.027) 0:07:00.778 ***** 2025-12-13 07:00:04,628 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.027) 0:07:00.776 ***** 2025-12-13 07:00:04,645 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:00:04,653 p=31853 u=zuul n=ansible | TASK [Prepare OpenStackVersion CR name=update_containers] ********************** 2025-12-13 07:00:04,653 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.025) 0:07:00.803 ***** 2025-12-13 07:00:04,653 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.025) 0:07:00.801 ***** 2025-12-13 07:00:04,673 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:00:04,681 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Controlplane name kustomization _ctlplane_name_kustomizations=[{'apiVersion': 'kustomize.config.k8s.io/v1beta1', 'kind': 'Kustomization', 'patches': [{'target': {'kind': 'OpenStackControlPlane'}, 'patch': '- op: replace\n path: /metadata/name\n value: {{ _ctlplane_name }}'}]}]] *** 2025-12-13 07:00:04,681 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.028) 0:07:00.831 ***** 2025-12-13 07:00:04,681 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.028) 0:07:00.830 ***** 2025-12-13 07:00:04,701 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:00:04,715 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Perform kustomizations to the OpenStackControlPlane CR target_path={{ cifmw_edpm_prepare_openstack_crs_path }}, sort_ascending=False, kustomizations={{ cifmw_edpm_prepare_kustomizations + _ctlplane_name_kustomizations + 
(cifmw_edpm_prepare_extra_kustomizations | default([])) }}, kustomizations_paths={{ [ ( [ cifmw_edpm_prepare_manifests_dir, 'kustomizations', 'controlplane' ] | ansible.builtin.path_join ) ] }}] *** 2025-12-13 07:00:04,715 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.033) 0:07:00.864 ***** 2025-12-13 07:00:04,715 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:04 +0000 (0:00:00.033) 0:07:00.863 ***** 2025-12-13 07:00:05,520 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:00:05,529 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Log the CR that is about to be applied var=cifmw_edpm_prepare_crs_kustomize_result] *** 2025-12-13 07:00:05,529 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:05 +0000 (0:00:00.814) 0:07:01.679 ***** 2025-12-13 07:00:05,529 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:05 +0000 (0:00:00.814) 0:07:01.677 ***** 2025-12-13 07:00:05,565 p=31853 u=zuul n=ansible | ok: [localhost] => cifmw_edpm_prepare_crs_kustomize_result: changed: true count: 5 failed: false kustomizations_paths: - /home/zuul/ci-framework-data/artifacts/manifests/openstack/openstack/cr/kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/controlplane/99-kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/controlplane/95-hci-pre-kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/controlplane/80-horizon-kustomization.yaml output_path: /home/zuul/ci-framework-data/artifacts/manifests/openstack/openstack/cr/cifmw-kustomization-result.yaml result: - apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: labels: created-by: install_yamls name: controlplane namespace: openstack spec: barbican: apiOverride: route: {} template: barbicanAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 barbicanKeystoneListener: replicas: 1 barbicanWorker: replicas: 1 databaseInstance: openstack secret: osp-secret cinder: apiOverride: route: {} template: cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cinderBackup: networkAttachments: - storage replicas: 0 cinderScheduler: replicas: 1 cinderVolumes: volume1: networkAttachments: - storage replicas: 0 databaseInstance: openstack secret: osp-secret designate: apiOverride: route: {} enabled: false template: databaseInstance: openstack designateAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer designateBackendbind9: networkAttachments: - designate replicas: 1 storageClass: local-storage storageRequest: 10G designateCentral: replicas: 1 designateMdns: networkAttachments: - designate replicas: 1 designateProducer: replicas: 1 designateWorker: networkAttachments: - designate replicas: 1 secret: osp-secret dns: template: options: - key: server values: - 192.168.122.10 - key: no-negcache values: [] override: service: metadata: annotations: metallb.universe.tf/address-pool: ctlplane metallb.universe.tf/allow-shared-ip: ctlplane 
metallb.universe.tf/loadBalancerIPs: 192.168.122.80 spec: type: LoadBalancer replicas: 1 galera: templates: openstack: replicas: 1 secret: osp-secret storageRequest: 10G openstack-cell1: replicas: 1 secret: osp-secret storageRequest: 10G glance: apiOverrides: default: route: {} template: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:swift [glance_store] default_backend = default_backend [default_backend] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_endpoint_type = internalURL swift_store_user = service:glance swift_store_key = {{ .ServicePassword }} databaseInstance: openstack glanceAPIs: default: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 0 type: split keystoneEndpoint: default secret: osp-secret storage: storageClass: '' storageRequest: 10G heat: apiOverride: route: {} cnfAPIOverride: route: {} enabled: false template: databaseInstance: openstack heatAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 heatEngine: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 secret: osp-secret horizon: apiOverride: route: {} enabled: true template: memcachedInstance: memcached replicas: 1 secret: osp-secret ironic: enabled: false template: databaseInstance: openstack ironicAPI: replicas: 1 ironicConductors: - replicas: 1 storageRequest: 10G ironicInspector: replicas: 1 ironicNeutronAgent: replicas: 1 secret: osp-secret keystone: apiOverride: route: {} template: databaseInstance: openstack override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer secret: osp-secret manila: apiOverride: route: {} template: databaseInstance: openstack manilaAPI: networkAttachments: - internalapi override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 manilaScheduler: replicas: 1 manilaShares: share1: networkAttachments: - storage replicas: 1 memcached: templates: memcached: replicas: 1 neutron: apiOverride: route: {} template: databaseInstance: openstack networkAttachments: - internalapi override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer secret: osp-secret nova: apiOverride: route: {} template: apiServiceTemplate: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer cellTemplates: cell0: cellDatabaseAccount: nova-cell0 cellDatabaseInstance: openstack 
cellMessageBusInstance: rabbitmq conductorServiceTemplate: replicas: 1 hasAPIAccess: true cell1: cellDatabaseAccount: nova-cell1 cellDatabaseInstance: openstack-cell1 cellMessageBusInstance: rabbitmq-cell1 conductorServiceTemplate: replicas: 1 hasAPIAccess: true metadataServiceTemplate: override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer secret: osp-secret octavia: enabled: false template: databaseInstance: openstack octaviaAPI: replicas: 1 secret: osp-secret ovn: template: ovnController: networkAttachment: tenant nicMappings: datacentre: ospbr ovnDBCluster: ovndbcluster-nb: dbType: NB networkAttachment: internalapi storageRequest: 10G ovndbcluster-sb: dbType: SB networkAttachment: internalapi storageRequest: 10G placement: apiOverride: route: {} template: databaseInstance: openstack override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer secret: osp-secret rabbitmq: templates: rabbitmq: override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.85 spec: type: LoadBalancer rabbitmq-cell1: override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.86 spec: type: LoadBalancer redis: enabled: false secret: osp-secret storageClass: local-storage swift: enabled: false proxyOverride: route: {} template: swiftProxy: networkAttachments: - storage override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 swiftRing: ringReplicas: 1 swiftStorage: networkAttachments: - storage replicas: 1 telemetry: enabled: true template: autoscaling: aodh: databaseAccount: aodh databaseInstance: openstack passwordSelectors: null secret: osp-secret enabled: false heatInstance: heat ceilometer: enabled: true secret: osp-secret cloudkitty: apiTimeout: 0 cloudKittyAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 resources: {} tls: api: internal: {} public: {} caBundleSecretName: combined-ca-bundle cloudKittyProc: replicas: 1 resources: {} tls: caBundleSecretName: combined-ca-bundle databaseAccount: cloudkitty databaseInstance: openstack enabled: false memcachedInstance: memcached passwordSelector: aodhService: AodhPassword ceilometerService: CeilometerPassword cloudKittyService: CloudKittyPassword preserveJobs: false rabbitMqClusterName: rabbitmq s3StorageConfig: schemas: - effectiveDate: '2024-11-18' version: v13 secret: name: logging-loki-s3 type: s3 secret: osp-secret serviceUser: cloudkitty storageClass: local-storage logging: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 cloNamespace: openshift-logging enabled: false ipaddr: 172.17.0.80 port: 10514 metricStorage: enabled: false monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: persistent: pvcStorageRequest: 10G 
retention: 24h strategy: persistent 2025-12-13 07:00:05,573 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Apply the OpenStackControlPlane CR output_dir={{ cifmw_edpm_prepare_basedir }}/artifacts, script=oc apply -f {{ cifmw_edpm_prepare_crs_kustomize_result.output_path }}] *** 2025-12-13 07:00:05,574 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:05 +0000 (0:00:00.044) 0:07:01.723 ***** 2025-12-13 07:00:05,574 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:05 +0000 (0:00:00.044) 0:07:01.722 ***** 2025-12-13 07:00:05,618 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_010_apply_the.log 2025-12-13 07:00:05,847 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:00:05,854 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Wait for control plane to change its status seconds={{ cifmw_edpm_prepare_wait_controplane_status_change_sec }}] *** 2025-12-13 07:00:05,854 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:05 +0000 (0:00:00.280) 0:07:02.004 ***** 2025-12-13 07:00:05,854 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:05 +0000 (0:00:00.280) 0:07:02.003 ***** 2025-12-13 07:00:05,874 p=31853 u=zuul n=ansible | Pausing for 30 seconds 2025-12-13 07:00:35,879 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:00:35,886 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Wait for OpenStack controlplane to be deployed _raw_params=oc wait OpenStackControlPlane {{ _ctlplane_name }} --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for=condition=ready --timeout={{ cifmw_edpm_prepare_timeout }}m] *** 2025-12-13 07:00:35,887 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:35 +0000 (0:00:30.032) 0:07:32.036 ***** 2025-12-13 07:00:35,887 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:00:35 +0000 (0:00:30.032) 0:07:32.035 ***** 2025-12-13 07:04:42,189 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:42,195 p=31853 u=zuul n=ansible | TASK [Extract and install OpenStackControlplane CA role=install_openstack_ca] *** 2025-12-13 07:04:42,195 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:04:06.308) 0:11:38.345 ***** 2025-12-13 07:04:42,196 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:04:06.308) 0:11:38.344 ***** 2025-12-13 07:04:42,263 p=31853 u=zuul n=ansible | TASK [install_openstack_ca : Get CA bundle data with retries] ****************** 2025-12-13 07:04:42,263 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:00:00.067) 0:11:38.413 ***** 2025-12-13 07:04:42,263 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:00:00.067) 0:11:38.412 ***** 2025-12-13 07:04:42,562 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:42,569 p=31853 u=zuul n=ansible | TASK [install_openstack_ca : Set _ca_bundle fact if CA returned from OCP] ****** 2025-12-13 07:04:42,569 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:00:00.306) 0:11:38.719 ***** 2025-12-13 07:04:42,569 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:00:00.306) 0:11:38.718 ***** 2025-12-13 07:04:42,594 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:42,601 p=31853 u=zuul n=ansible | TASK [install_openstack_ca : Creating tls-ca-bundle.pem from CA bundle dest={{ cifmw_install_openstack_ca_file_full_path }}, content={{ _ca_bundle }}, mode=0644] *** 2025-12-13 07:04:42,601 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 
(0:00:00.032) 0:11:38.751 ***** 2025-12-13 07:04:42,601 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:00:00.032) 0:11:38.750 ***** 2025-12-13 07:04:42,957 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:42,965 p=31853 u=zuul n=ansible | TASK [install_openstack_ca : Check if OpenStackControlplane CA file is present path={{ cifmw_install_openstack_ca_file_full_path }}, get_attributes=False, get_checksum=False, get_mime=False] *** 2025-12-13 07:04:42,965 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:00:00.363) 0:11:39.114 ***** 2025-12-13 07:04:42,965 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:42 +0000 (0:00:00.363) 0:11:39.113 ***** 2025-12-13 07:04:43,124 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:43,131 p=31853 u=zuul n=ansible | TASK [Call install_ca role to inject OpenStackControlplane CA file if present role=install_ca] *** 2025-12-13 07:04:43,132 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.166) 0:11:39.281 ***** 2025-12-13 07:04:43,132 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.166) 0:11:39.280 ***** 2025-12-13 07:04:43,177 p=31853 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] *** 2025-12-13 07:04:43,177 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.045) 0:11:39.326 ***** 2025-12-13 07:04:43,177 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.045) 0:11:39.325 ***** 2025-12-13 07:04:43,362 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:43,370 p=31853 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] *** 2025-12-13 07:04:43,370 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.193) 0:11:39.520 ***** 2025-12-13 07:04:43,370 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.193) 0:11:39.518 ***** 2025-12-13 07:04:43,393 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:43,401 p=31853 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] *** 2025-12-13 07:04:43,401 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.030) 0:11:39.551 ***** 2025-12-13 07:04:43,401 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.030) 0:11:39.549 ***** 2025-12-13 07:04:43,422 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:43,430 p=31853 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] *** 2025-12-13 07:04:43,430 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.029) 0:11:39.580 ***** 2025-12-13 07:04:43,430 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.029) 0:11:39.579 ***** 2025-12-13 07:04:43,831 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:43,838 p=31853 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] *************** 2025-12-13 07:04:43,838 p=31853 u=zuul n=ansible | 
Saturday 13 December 2025 07:04:43 +0000 (0:00:00.408) 0:11:39.988 ***** 2025-12-13 07:04:43,838 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:43 +0000 (0:00:00.408) 0:11:39.987 ***** 2025-12-13 07:04:45,097 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:45,116 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Extract keystone endpoint host _raw_params=oc get keystoneapi keystone --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} -o jsonpath='{ .status.apiEndpoints.public }'] *** 2025-12-13 07:04:45,116 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:01.278) 0:11:41.266 ***** 2025-12-13 07:04:45,116 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:01.278) 0:11:41.265 ***** 2025-12-13 07:04:45,406 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:45,414 p=31853 u=zuul n=ansible | TASK [edpm_prepare : Wait for keystone endpoint to exist in DNS url={{ _cifmw_edpm_prepare_keystone_endpoint_out.stdout | trim }}, status_code={{ _keystone_response_codes }}, validate_certs={{ cifmw_edpm_prepare_verify_tls }}] *** 2025-12-13 07:04:45,414 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.297) 0:11:41.563 ***** 2025-12-13 07:04:45,414 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.297) 0:11:41.562 ***** 2025-12-13 07:04:45,793 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:45,806 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 07:04:45,807 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.392) 0:11:41.956 ***** 2025-12-13 07:04:45,807 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.392) 0:11:41.955 ***** 2025-12-13 07:04:45,858 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:45,865 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-12-13 07:04:45,866 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.058) 0:11:42.015 ***** 2025-12-13 07:04:45,866 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.058) 0:11:42.014 ***** 2025-12-13 07:04:45,943 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:45,952 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_ctlplane_deploy _raw_params={{ hook.type }}.yml] *** 2025-12-13 07:04:45,952 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.086) 0:11:42.102 ***** 2025-12-13 07:04:45,952 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:45 +0000 (0:00:00.086) 0:11:42.101 ***** 2025-12-13 07:04:46,051 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': 'Tune rabbitmq resources', 'type': 'playbook', 'source': 'rabbitmq_tuning.yml'}) 2025-12-13 07:04:46,062 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for Tune rabbitmq resources cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 07:04:46,062 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.109) 0:11:42.211 ***** 2025-12-13 07:04:46,062 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.109) 0:11:42.210 ***** 2025-12-13 07:04:46,106 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:46,114 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 07:04:46,114 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.052) 0:11:42.263 ***** 2025-12-13 07:04:46,114 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.052) 0:11:42.262 ***** 2025-12-13 07:04:46,291 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:46,300 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
*** 2025-12-13 07:04:46,300 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.186) 0:11:42.450 ***** 2025-12-13 07:04:46,301 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.186) 0:11:42.449 ***** 2025-12-13 07:04:46,323 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:46,331 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 07:04:46,332 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.031) 0:11:42.481 ***** 2025-12-13 07:04:46,332 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.031) 0:11:42.480 ***** 2025-12-13 07:04:46,495 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:46,503 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 07:04:46,503 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.171) 0:11:42.653 ***** 2025-12-13 07:04:46,503 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.171) 0:11:42.651 ***** 2025-12-13 07:04:46,527 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:46,535 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 07:04:46,535 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.032) 0:11:42.685 ***** 2025-12-13 07:04:46,535 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.032) 0:11:42.684 ***** 2025-12-13 07:04:46,700 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:46,708 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 07:04:46,708 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.172) 0:11:42.857 ***** 2025-12-13 07:04:46,708 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.172) 0:11:42.856 ***** 2025-12-13 07:04:46,877 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:46,887 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Tune rabbitmq resources] ************* 2025-12-13 07:04:46,887 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.179) 0:11:43.037 ***** 2025-12-13 07:04:46,887 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:46 +0000 (0:00:00.179) 0:11:43.036 ***** 2025-12-13 07:04:46,939 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_011_run_hook_without_retry_tune.log 2025-12-13 07:04:49,112 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:49,121 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Tune rabbitmq resources] **************** 2025-12-13 07:04:49,121 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:02.233) 0:11:45.271 ***** 2025-12-13 07:04:49,121 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:02.233) 0:11:45.269 ***** 2025-12-13 07:04:49,146 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,154 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ 
hook_name }}.yml] *** 2025-12-13 07:04:49,154 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.033) 0:11:45.304 ***** 2025-12-13 07:04:49,154 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.033) 0:11:45.303 ***** 2025-12-13 07:04:49,310 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:49,317 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 07:04:49,317 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.163) 0:11:45.467 ***** 2025-12-13 07:04:49,318 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.163) 0:11:45.466 ***** 2025-12-13 07:04:49,337 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,350 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 07:04:49,350 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.032) 0:11:45.499 ***** 2025-12-13 07:04:49,350 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.032) 0:11:45.498 ***** 2025-12-13 07:04:49,394 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:49,403 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Define minimal set of repo variables when not running on Zuul _install_yamls_repos={{ ( { 'OPENSTACK_REPO': operators_build_output[cifmw_operator_build_meta_name].git_src_dir, 'OPENSTACK_BRANCH': '', 'GIT_CLONE_OPTS': '-l', } if (cifmw_operator_build_meta_name is defined and cifmw_operator_build_meta_name in operators_build_output) else {} ) }}] *** 2025-12-13 07:04:49,403 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.053) 0:11:45.553 ***** 2025-12-13 07:04:49,403 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.053) 0:11:45.551 ***** 2025-12-13 07:04:49,423 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,430 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Set install_yamls Makefile environment variables cifmw_edpm_deploy_baremetal_common_env={{ cifmw_install_yamls_environment | combine({'PATH': cifmw_path}) | combine(_install_yamls_repos | default({})) }}, cifmw_edpm_deploy_baremetal_make_openstack_env={{ cifmw_edpm_deploy_baremetal_make_openstack_env | default({}) | combine( { 'OPENSTACK_IMG': operators_build_output[cifmw_operator_build_meta_name].image_catalog, } if (cifmw_operator_build_meta_name is defined and cifmw_operator_build_meta_name in operators_build_output) else {} ) }}, cifmw_edpm_deploy_baremetal_operators_build_output={{ operators_build_output }}] *** 2025-12-13 07:04:49,431 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.580 ***** 2025-12-13 07:04:49,431 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.579 ***** 2025-12-13 07:04:49,450 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,457 p=31853 u=zuul n=ansible | TASK [Create virtual baremetal name=install_yamls_makes, tasks_from=make_edpm_baremetal_compute] *** 2025-12-13 07:04:49,457 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.026) 0:11:45.607 ***** 2025-12-13 07:04:49,457 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.026) 0:11:45.606 ***** 2025-12-13 07:04:49,479 p=31853 u=zuul n=ansible | skipping: [localhost] 
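For context on the "Loop on hooks for post_ctlplane_deploy" step above: run_hook consumes a list of hook mappings and includes {{ hook.type }}.yml for each entry, and the included item in the log shows the exact shape used here. A minimal sketch of such a declaration, assuming the list is supplied through a post_ctlplane_deploy variable (the variable name and the comments are illustrative; only the three keys and their values come from the log):

post_ctlplane_deploy:
  - name: Tune rabbitmq resources      # becomes the "Run hook ... - Tune rabbitmq resources" task names above
    type: playbook                     # makes run_hook include playbook.yml for this item
    source: rabbitmq_tuning.yml        # resolved to playbook_path; output lands in logs/{{ step }}_{{ hook_name }}.log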
2025-12-13 07:04:49,486 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Create the config file mode=0644, content={{ cifmw_edpm_deploy_baremetal_nova_compute_extra_config }}, dest={{ _cifmw_edpm_deploy_baremetal_nova_extra_config_file }}] *** 2025-12-13 07:04:49,486 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.636 ***** 2025-12-13 07:04:49,486 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.634 ***** 2025-12-13 07:04:49,505 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,513 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Define DATAPLANE_EXTRA_NOVA_CONFIG_FILE cifmw_edpm_deploy_baremetal_common_env={{ cifmw_edpm_deploy_baremetal_common_env | default({}) | combine({'DATAPLANE_EXTRA_NOVA_CONFIG_FILE': _cifmw_edpm_deploy_baremetal_nova_extra_config_file }) }}, cacheable=True] *** 2025-12-13 07:04:49,514 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.663 ***** 2025-12-13 07:04:49,514 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.662 ***** 2025-12-13 07:04:49,533 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,540 p=31853 u=zuul n=ansible | TASK [Prepare OpenStack Dataplane NodeSet CR name=install_yamls_makes, tasks_from=make_edpm_deploy_baremetal_prep] *** 2025-12-13 07:04:49,541 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.026) 0:11:45.690 ***** 2025-12-13 07:04:49,541 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.026) 0:11:45.689 ***** 2025-12-13 07:04:49,560 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,569 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Perform kustomizations to the OpenStackDataPlaneNodeSet CR target_path={{ cifmw_edpm_deploy_openstack_crs_path }}, sort_ascending=False, kustomizations={% if content_provider_registry_ip is defined or not cifmw_edpm_deploy_baremetal_bootc %} apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization patches: - target: kind: OpenStackDataPlaneNodeSet patch: |- {% if content_provider_registry_ip is defined %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_container_registry_insecure_registries value: ["{{ content_provider_registry_ip }}:5001"] {% endif %} {% if not cifmw_edpm_deploy_baremetal_bootc %} - op: add path: /spec/nodeTemplate/ansible/ansibleVars/edpm_bootstrap_command value: sudo dnf -y update {% endif %} {% endif %}, kustomizations_paths={{ [ ( [ cifmw_edpm_deploy_baremetal_manifests_dir, 'kustomizations', 'dataplane' ] | ansible.builtin.path_join ) ] }}] *** 2025-12-13 07:04:49,569 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.719 ***** 2025-12-13 07:04:49,569 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.718 ***** 2025-12-13 07:04:49,589 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,596 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Log the CR that is about to be applied var=cifmw_edpm_deploy_baremetal_crs_kustomize_result] *** 2025-12-13 07:04:49,597 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.746 ***** 2025-12-13 07:04:49,597 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.745 ***** 2025-12-13 07:04:49,616 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,624 
p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Create repo-setup-downstream OpenStackDataPlaneService _raw_params=oc apply -n {{ cifmw_install_yamls_defaults['NAMESPACE'] }} -f "{{ cifmw_installyamls_repos }}/devsetup/edpm/services/dataplane_v1beta1_openstackdataplaneservice_reposetup_downstream.yaml"] *** 2025-12-13 07:04:49,624 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.774 ***** 2025-12-13 07:04:49,624 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.772 ***** 2025-12-13 07:04:49,643 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,651 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Get list of services defined under OpenStackDataPlaneNodeSet resource _raw_params=yq '.spec.services[]' {{ cifmw_edpm_deploy_baremetal_crs_kustomize_result.output_path }}] *** 2025-12-13 07:04:49,651 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.801 ***** 2025-12-13 07:04:49,651 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.799 ***** 2025-12-13 07:04:49,670 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,678 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Patch OpenStackDataPlaneNodeSet resource to add "repo-setup-downstream" service _raw_params=yq -i '.spec.services = ["repo-setup-downstream"] + .spec.services' {{ cifmw_edpm_deploy_baremetal_crs_kustomize_result.output_path }}] *** 2025-12-13 07:04:49,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.026) 0:11:45.828 ***** 2025-12-13 07:04:49,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.026) 0:11:45.826 ***** 2025-12-13 07:04:49,698 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,705 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Patch OpenStackDataPlaneNodeSet resource to replace "repo-setup" with "repo-setup-downstream" service _raw_params=yq -i '(.spec.services[] | select(. 
== "repo-setup")) |= "repo-setup-downstream"' {{ cifmw_edpm_deploy_baremetal_crs_kustomize_result.output_path }}] *** 2025-12-13 07:04:49,705 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.855 ***** 2025-12-13 07:04:49,705 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.027) 0:11:45.853 ***** 2025-12-13 07:04:49,727 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,734 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Apply the OpenStackDataPlaneNodeSet CR output_dir={{ cifmw_edpm_deploy_baremetal_basedir }}/artifacts, script=oc apply -f {{ cifmw_edpm_deploy_baremetal_crs_kustomize_result.output_path }}] *** 2025-12-13 07:04:49,734 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:45.884 ***** 2025-12-13 07:04:49,734 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:45.883 ***** 2025-12-13 07:04:49,756 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,763 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Wait for Ironic to be ready _raw_params=oc wait pod -l name=ironic -n baremetal-operator-system --for=condition=Ready --timeout={{ cifmw_edpm_deploy_baremetal_wait_ironic_timeout_mins }}m] *** 2025-12-13 07:04:49,763 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.913 ***** 2025-12-13 07:04:49,763 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.912 ***** 2025-12-13 07:04:49,784 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,792 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Wait for OpenStack Provision Server pod to be created _raw_params=oc get po -l osp-provisionserver/name=openstack-edpm-ipam-provisionserver -n {{ cifmw_install_yamls_defaults['NAMESPACE'] }} -o name] *** 2025-12-13 07:04:49,792 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.942 ***** 2025-12-13 07:04:49,792 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:45.941 ***** 2025-12-13 07:04:49,813 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,822 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Wait for OpenStack Provision Server deployment to be available _raw_params=oc wait deployment openstack-edpm-ipam-provisionserver-openstackprovisionserver -n {{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for condition=Available --timeout={{ cifmw_edpm_deploy_baremetal_wait_provisionserver_timeout_mins }}m] *** 2025-12-13 07:04:49,822 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:45.971 ***** 2025-12-13 07:04:49,822 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:45.970 ***** 2025-12-13 07:04:49,844 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,852 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Wait for baremetal nodes to reach 'provisioned' state _raw_params=oc wait bmh --all -n {{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for=jsonpath='{.status.provisioning.state}'=provisioned --timeout={{ cifmw_edpm_deploy_baremetal_wait_bmh_timeout_mins }}m] *** 2025-12-13 07:04:49,852 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.030) 0:11:46.002 ***** 2025-12-13 07:04:49,852 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.030) 0:11:46.001 ***** 
2025-12-13 07:04:49,874 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,882 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Register the list of compute nodes _raw_params=oc get bmh -n {{ cifmw_install_yamls_defaults['NAMESPACE'] }}] *** 2025-12-13 07:04:49,882 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.030) 0:11:46.032 ***** 2025-12-13 07:04:49,882 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.030) 0:11:46.031 ***** 2025-12-13 07:04:49,903 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,911 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Print the list of compute nodes var=compute_nodes_output.stdout_lines] *** 2025-12-13 07:04:49,911 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:46.061 ***** 2025-12-13 07:04:49,911 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:46.060 ***** 2025-12-13 07:04:49,932 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,940 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Wait for OpenStackDataPlaneNodeSet to be deployed _raw_params=oc wait OpenStackDataPlaneNodeSet {{ cr_name }} --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for=condition=ready --timeout={{ cifmw_edpm_deploy_baremetal_wait_dataplane_timeout_mins }}m] *** 2025-12-13 07:04:49,940 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:46.090 ***** 2025-12-13 07:04:49,940 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.028) 0:11:46.089 ***** 2025-12-13 07:04:49,961 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:49,970 p=31853 u=zuul n=ansible | TASK [edpm_deploy_baremetal : Run nova-manage discover_hosts to ensure compute nodes are mapped _raw_params=oc rsh -n {{ cifmw_install_yamls_defaults['NAMESPACE'] }} nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose] *** 2025-12-13 07:04:49,970 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:46.120 ***** 2025-12-13 07:04:49,970 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:49 +0000 (0:00:00.029) 0:11:46.118 ***** 2025-12-13 07:04:49,995 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,008 p=31853 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 07:04:50,008 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.037) 0:11:46.157 ***** 2025-12-13 07:04:50,008 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.037) 0:11:46.156 ***** 2025-12-13 07:04:50,153 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:50,161 p=31853 u=zuul n=ansible | TASK [libvirt_manager : Set compute config and common environment facts compute_config={{ cifmw_libvirt_manager_configuration['vms']['compute'] }}, cifmw_libvirt_manager_common_env={{ cifmw_install_yamls_environment | combine({'PATH': cifmw_path }) }}, cacheable=True] *** 2025-12-13 07:04:50,161 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.153) 0:11:46.311 ***** 2025-12-13 07:04:50,161 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.153) 0:11:46.310 ***** 2025-12-13 07:04:50,187 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,195 p=31853 u=zuul n=ansible | TASK [libvirt_manager : Ensure needed directories 
exist path={{ item }}, state=directory, mode=0755] *** 2025-12-13 07:04:50,195 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.034) 0:11:46.345 ***** 2025-12-13 07:04:50,195 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.034) 0:11:46.344 ***** 2025-12-13 07:04:50,234 p=31853 u=zuul n=ansible | skipping: [localhost] => (item=/home/zuul/ci-framework-data/workload) 2025-12-13 07:04:50,244 p=31853 u=zuul n=ansible | skipping: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/edpm_compute) 2025-12-13 07:04:50,252 p=31853 u=zuul n=ansible | skipping: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/openstack/cr/) 2025-12-13 07:04:50,253 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,260 p=31853 u=zuul n=ansible | TASK [libvirt_manager : Ensure image is available _raw_params=get_image.yml] *** 2025-12-13 07:04:50,260 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.064) 0:11:46.410 ***** 2025-12-13 07:04:50,260 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.064) 0:11:46.409 ***** 2025-12-13 07:04:50,285 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,293 p=31853 u=zuul n=ansible | TASK [Create EDPM compute VMs name=install_yamls_makes, tasks_from=make_edpm_compute.yml] *** 2025-12-13 07:04:50,293 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.033) 0:11:46.443 ***** 2025-12-13 07:04:50,293 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.033) 0:11:46.442 ***** 2025-12-13 07:04:50,319 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,326 p=31853 u=zuul n=ansible | TASK [libvirt_manager : Catch compute IPs _raw_params=virsh -c qemu:///system -q domifaddr --source arp --domain edpm-compute-{{ item }}] *** 2025-12-13 07:04:50,326 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.032) 0:11:46.476 ***** 2025-12-13 07:04:50,326 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.032) 0:11:46.475 ***** 2025-12-13 07:04:50,351 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,359 p=31853 u=zuul n=ansible | TASK [libvirt_manager : Ensure we get SSH host={{ item.stdout.split()[-1].split('/')[0] }}, port=22, timeout=60] *** 2025-12-13 07:04:50,359 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.032) 0:11:46.509 ***** 2025-12-13 07:04:50,359 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.032) 0:11:46.508 ***** 2025-12-13 07:04:50,384 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,392 p=31853 u=zuul n=ansible | TASK [libvirt_manager : Output CR for extra computes dest={{ cifmw_libvirt_manager_basedir }}/artifacts/{{ cifmw_install_yamls_defaults['NAMESPACE'] }}/cr/99-cifmw-computes-{{ item }}.yaml, src=kustomize_compute.yml.j2, mode=0644] *** 2025-12-13 07:04:50,392 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.032) 0:11:46.542 ***** 2025-12-13 07:04:50,392 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.032) 0:11:46.540 ***** 2025-12-13 07:04:50,419 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:50,433 p=31853 u=zuul n=ansible | TASK [Prepare for HCI deploy phase 1 name=hci_prepare, tasks_from=phase1.yml] *** 2025-12-13 07:04:50,433 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.041) 0:11:46.583 ***** 
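The "Catch compute IPs" and "Ensure we get SSH" libvirt_manager tasks skipped above pair a virsh ARP lookup with a wait_for probe on the address it returns. A minimal sketch of that pattern, assuming a hypothetical _compute_ids list to loop over and a registered _domifaddr result; the virsh command line and the stdout parsing are the ones shown in the task parameters:

- name: Catch compute IPs
  ansible.builtin.command: >-
    virsh -c qemu:///system -q domifaddr --source arp --domain edpm-compute-{{ item }}
  loop: "{{ _compute_ids }}"
  register: _domifaddr

- name: Ensure we get SSH
  ansible.builtin.wait_for:
    host: "{{ item.stdout.split()[-1].split('/')[0] }}"  # last field is addr/prefix; keep only the address
    port: 22
    timeout: 60
  loop: "{{ _domifaddr.results }}"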
2025-12-13 07:04:50,433 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.041) 0:11:46.582 ***** 2025-12-13 07:04:50,501 p=31853 u=zuul n=ansible | TASK [hci_prepare : Set common facts _cifmw_hci_prepare_namespace={{ cifmw_install_yamls_defaults.NAMESPACE | default(cifmw_hci_prepare_namespace) }}] *** 2025-12-13 07:04:50,502 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.068) 0:11:46.651 ***** 2025-12-13 07:04:50,502 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.068) 0:11:46.650 ***** 2025-12-13 07:04:50,529 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:50,537 p=31853 u=zuul n=ansible | TASK [hci_prepare : Load parameters _raw_params=load_parameters.yml] *********** 2025-12-13 07:04:50,537 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.034) 0:11:46.686 ***** 2025-12-13 07:04:50,537 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.034) 0:11:46.685 ***** 2025-12-13 07:04:50,572 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/hci_prepare/tasks/load_parameters.yml for localhost 2025-12-13 07:04:50,583 p=31853 u=zuul n=ansible | TASK [hci_prepare : Load parameters dir={{ item }}] **************************** 2025-12-13 07:04:50,583 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.046) 0:11:46.733 ***** 2025-12-13 07:04:50,584 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.046) 0:11:46.732 ***** 2025-12-13 07:04:50,644 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2025-12-13 07:04:50,658 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/etc/ci/env) 2025-12-13 07:04:50,673 p=31853 u=zuul n=ansible | TASK [hci_prepare : Extract first compute from inventory _first_compute={{ groups['computes'] | select('match', '^compute.*0$') | first }}] *** 2025-12-13 07:04:50,673 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.089) 0:11:46.823 ***** 2025-12-13 07:04:50,673 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.089) 0:11:46.822 ***** 2025-12-13 07:04:50,703 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:50,711 p=31853 u=zuul n=ansible | TASK [hci_prepare : Ensure we have needed bits for compute when needed that=['_first_compute | length != 0', 'crc_ci_bootstrap_networks_out[_first_compute] is defined', "crc_ci_bootstrap_networks_out[_first_compute]['storage-mgmt'] is defined or crc_ci_bootstrap_networks_out[_first_compute]['storagemgmt'] is defined"]] *** 2025-12-13 07:04:50,711 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.037) 0:11:46.860 ***** 2025-12-13 07:04:50,711 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.037) 0:11:46.859 ***** 2025-12-13 07:04:50,742 p=31853 u=zuul n=ansible | ok: [localhost] => changed: false msg: All assertions passed 2025-12-13 07:04:50,750 p=31853 u=zuul n=ansible | TASK [hci_prepare : Set mtu value from crc_ci_bootstrap_networks_out cifmw_hci_prepare_storage_mgmt_mtu={{ crc_ci_bootstrap_networks_out[_first_compute]['storage-mgmt'].mtu | default(crc_ci_bootstrap_networks_out[_first_compute]['storagemgmt'].mtu) }}] *** 2025-12-13 07:04:50,750 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.039) 0:11:46.899 ***** 2025-12-13 07:04:50,750 p=31853 u=zuul n=ansible | Saturday 13 December 
2025 07:04:50 +0000 (0:00:00.039) 0:11:46.898 ***** 2025-12-13 07:04:50,781 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:50,790 p=31853 u=zuul n=ansible | TASK [hci_prepare : Set vlan value from crc_ci_bootstrap_networks_out cifmw_hci_prepare_storage_mgmt_vlan={{ crc_ci_bootstrap_networks_out[_first_compute]['storage-mgmt'].vlan | default(crc_ci_bootstrap_networks_out[_first_compute]['storagemgmt'].vlan) }}] *** 2025-12-13 07:04:50,791 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.040) 0:11:46.940 ***** 2025-12-13 07:04:50,791 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.040) 0:11:46.939 ***** 2025-12-13 07:04:50,820 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:50,828 p=31853 u=zuul n=ansible | TASK [hci_prepare : Ensure the kustomizations dirs exists path={{ cifmw_hci_prepare_dataplane_dir }}, state=directory, mode=0755] *** 2025-12-13 07:04:50,828 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.037) 0:11:46.978 ***** 2025-12-13 07:04:50,828 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:50 +0000 (0:00:00.037) 0:11:46.976 ***** 2025-12-13 07:04:51,009 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:51,017 p=31853 u=zuul n=ansible | TASK [hci_prepare : Prepare EDPM network for HCI deployment mode=0644, dest={{ cifmw_hci_prepare_dataplane_dir }}/89-storage-mgmt-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: namespace: {{ _cifmw_hci_prepare_namespace }} patches: - target: kind: OpenStackDataPlaneNodeSet patch: |- {% for compute_node in groups['computes'] %} - op: add path: /spec/nodes/edpm-{{ compute_node }}/networks/- value: name: StorageMgmt subnetName: subnet1 {% endfor %}] *** 2025-12-13 07:04:51,017 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.188) 0:11:47.167 ***** 2025-12-13 07:04:51,017 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.188) 0:11:47.165 ***** 2025-12-13 07:04:51,374 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:51,382 p=31853 u=zuul n=ansible | TASK [hci_prepare : Enable services needed to deploy Ceph mode=0644, dest={{ cifmw_hci_prepare_dataplane_dir }}/88-hci-pre-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: namespace: {{ _cifmw_hci_prepare_namespace }} patches: - target: kind: OpenStackDataPlaneNodeSet patch: |- - op: replace path: /spec/services value: {% if cifmw_hci_prepare_enable_repo_setup_service|bool %} - repo-setup {% endif %} - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os] *** 2025-12-13 07:04:51,382 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.364) 0:11:47.532 ***** 2025-12-13 07:04:51,382 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.364) 0:11:47.530 ***** 2025-12-13 07:04:51,729 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:51,737 p=31853 u=zuul n=ansible | TASK [hci_prepare : Disable discover_hosts when deploying hci on phase1 cifmw_edpm_deploy_skip_nova_discover_hosts=True] *** 2025-12-13 07:04:51,737 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.354) 0:11:47.886 ***** 2025-12-13 07:04:51,737 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.354) 0:11:47.885 ***** 2025-12-13 07:04:51,763 p=31853 
u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:51,776 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Set EDPM related vars cifmw_edpm_deploy_env={{ cifmw_install_yamls_environment | combine({'PATH': cifmw_path}) | combine({'DATAPLANE_REGISTRY_URL': cifmw_edpm_deploy_registry_url }) | combine({'DATAPLANE_CONTAINER_TAG': cifmw_repo_setup_full_hash | default(cifmw_install_yamls_defaults['DATAPLANE_CONTAINER_TAG']) }) | combine(cifmw_edpm_deploy_extra_vars | default({})) | combine(_install_yamls_repos | default({})) }}, cacheable=True] *** 2025-12-13 07:04:51,776 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.039) 0:11:47.926 ***** 2025-12-13 07:04:51,776 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.039) 0:11:47.924 ***** 2025-12-13 07:04:51,811 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:51,818 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Create the config file mode=0644, content={{ cifmw_edpm_deploy_nova_compute_extra_config }}, dest={{ _cifmw_edpm_deploy_nova_extra_config_file }}] *** 2025-12-13 07:04:51,818 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.042) 0:11:47.968 ***** 2025-12-13 07:04:51,818 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:51 +0000 (0:00:00.042) 0:11:47.967 ***** 2025-12-13 07:04:52,164 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:52,172 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Define DATAPLANE_EXTRA_NOVA_CONFIG_FILE cifmw_edpm_deploy_env={{ cifmw_edpm_deploy_env | default({}) | combine({'DATAPLANE_EXTRA_NOVA_CONFIG_FILE': _cifmw_edpm_deploy_nova_extra_config_file }) }}, cacheable=True] *** 2025-12-13 07:04:52,172 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.353) 0:11:48.322 ***** 2025-12-13 07:04:52,172 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.353) 0:11:48.320 ***** 2025-12-13 07:04:52,206 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:04:52,213 p=31853 u=zuul n=ansible | TASK [Prepare OpenStack Dataplane NodeSet CR name=install_yamls_makes, tasks_from=make_edpm_deploy_prep] *** 2025-12-13 07:04:52,213 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.040) 0:11:48.363 ***** 2025-12-13 07:04:52,213 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.040) 0:11:48.361 ***** 2025-12-13 07:04:52,259 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_edpm_deploy_prep_env var=make_edpm_deploy_prep_env] *** 2025-12-13 07:04:52,259 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.046) 0:11:48.409 ***** 2025-12-13 07:04:52,259 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.046) 0:11:48.407 ***** 2025-12-13 07:04:52,290 p=31853 u=zuul n=ansible | ok: [localhost] => make_edpm_deploy_prep_env: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_TAG: c3923531bcda0b0811b2d5053f189beb DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /home/zuul/ci-framework-data/nova-extra-config.conf DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_SINGLE_NODE: 'true' DATAPLANE_SSHD_ALLOWED_RANGES: '[''0.0.0.0/0'']' DATAPLANE_TOTAL_NODES: 1 INSTALL_CERT_MANAGER: false KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm PATH: 
/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin SSH_KEY_FILE: /home/zuul/.ssh/id_cifw TEST_BRANCH: '' TEST_REPO: /home/zuul/src/github.com/openstack-k8s-operators/test-operator 2025-12-13 07:04:52,298 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_edpm_deploy_prep_params var=make_edpm_deploy_prep_params] *** 2025-12-13 07:04:52,298 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.038) 0:11:48.447 ***** 2025-12-13 07:04:52,298 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.038) 0:11:48.446 ***** 2025-12-13 07:04:52,323 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:04:52,331 p=31853 u=zuul n=ansible | TASK [install_yamls_makes : Run edpm_deploy_prep output_dir={{ cifmw_basedir|default(ansible_user_dir ~ '/ci-framework-data') }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls, script=make edpm_deploy_prep, dry_run={{ make_edpm_deploy_prep_dryrun|default(false)|bool }}, extra_args={{ dict((make_edpm_deploy_prep_env|default({})), **(make_edpm_deploy_prep_params|default({}))) }}] *** 2025-12-13 07:04:52,331 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.032) 0:11:48.480 ***** 2025-12-13 07:04:52,331 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:52 +0000 (0:00:00.032) 0:11:48.479 ***** 2025-12-13 07:04:52,380 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_012_run_edpm_deploy.log 2025-12-13 07:04:59,126 p=31853 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ make_edpm_deploy_prep_until | default(true) }} 2025-12-13 07:04:59,128 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:59,141 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Perform kustomizations to the OpenStackDataPlaneNodeSet CR target_path={{ cifmw_edpm_deploy_openstack_crs_path }}, sort_ascending=False, kustomizations_paths={{ [ ( [ cifmw_edpm_deploy_manifests_dir, 'kustomizations', 'dataplane' ] | ansible.builtin.path_join ) ] }}] *** 2025-12-13 07:04:59,141 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:59 +0000 (0:00:06.810) 0:11:55.291 ***** 2025-12-13 07:04:59,141 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:59 +0000 (0:00:06.810) 0:11:55.290 ***** 2025-12-13 07:04:59,670 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:04:59,677 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Log the CR that is about to be applied var=cifmw_edpm_deploy_crs_kustomize_result] *** 2025-12-13 07:04:59,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:59 +0000 (0:00:00.536) 0:11:55.827 ***** 2025-12-13 07:04:59,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:59 +0000 (0:00:00.536) 0:11:55.826 ***** 2025-12-13 07:04:59,711 p=31853 u=zuul n=ansible | ok: [localhost] => cifmw_edpm_deploy_crs_kustomize_result: changed: true count: 4 failed: false kustomizations_paths: - /home/zuul/ci-framework-data/artifacts/manifests/openstack/dataplane/cr/kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane/99-kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane/89-storage-mgmt-kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane/88-hci-pre-kustomization.yaml 
output_path: /home/zuul/ci-framework-data/artifacts/manifests/openstack/dataplane/cr/cifmw-kustomization-result.yaml result: - apiVersion: v1 data: network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} kind: ConfigMap metadata: labels: created-by: install_yamls name: network-config-template-ipam namespace: openstack - apiVersion: v1 data: physical_bridge_name: br-ex public_interface_name: eth0 kind: ConfigMap metadata: labels: created-by: install_yamls name: neutron-edpm-ipam namespace: openstack - apiVersion: v1 data: 25-nova-extra.conf: '' kind: ConfigMap metadata: labels: created-by: install_yamls name: nova-extra-config namespace: openstack - apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: labels: created-by: install_yamls name: edpm-deployment namespace: openstack spec: nodeSets: - openstack-edpm-ipam - apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: labels: created-by: install_yamls name: openstack-edpm-ipam namespace: openstack spec: env: - name: ANSIBLE_VERBOSITY value: '2' networkAttachments: - ctlplane nodeTemplate: ansible: ansibleUser: zuul ansibleVars: ctlplane_dns_nameservers: - 192.168.122.10 - 1.1.1.1 edpm_container_registry_insecure_registries: - 38.129.56.153:5001 edpm_network_config_debug: true edpm_network_config_template: |- --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: true mtu: {{ min_viable_mtu }} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% if edpm_network_config_nmstate | bool %} # this ovs_extra configuration fixes OSPRH-17551, but it will be not needed when FDP-1472 is resolved ovs_extra: - "set interface eth1 external-ids:ovn-egress-iface=true" {% endif %} {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') 
}}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false edpm_os_net_config_mappings: net_config_data_lookup: edpm-compute: nic2: eth1 edpm_sshd_allowed_ranges: - 0.0.0.0/0 enable_debug: false gather_facts: false image_prefix: openstack image_tag: c3923531bcda0b0811b2d5053f189beb neutron_public_interface_name: eth1 registry_url: quay.io/podified-antelope-centos9 timesync_ntp_servers: - hostname: pool.ntp.org ansibleVarsFrom: - configMapRef: name: network-config-template-ipam prefix: edpm_ - configMapRef: name: neutron-edpm-ipam prefix: neutron_ ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret nodes: edpm-compute-0: ansible: ansibleHost: 192.168.122.100 hostName: compute-0 networks: - defaultRoute: false fixedIP: 192.168.122.100 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 - name: StorageMgmt subnetName: subnet1 preProvisioned: true services: - repo-setup - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os tlsEnabled: true 2025-12-13 07:04:59,720 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Apply dataplane resources but ignore DataPlaneDeployment kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit) }}, context={{ cifmw_openshift_context | default(omit) }}, state=present, definition={{ lookup('file', cifmw_edpm_deploy_crs_kustomize_result.output_path) | from_yaml_all | rejectattr('kind', 'search', cifmw_edpm_deploy_step2_kind) }}] *** 2025-12-13 07:04:59,720 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:59 +0000 (0:00:00.042) 0:11:55.870 ***** 2025-12-13 07:04:59,720 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:04:59 +0000 (0:00:00.042) 0:11:55.868 ***** 2025-12-13 07:05:00,393 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:05:00,402 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Wait for OpenStackDataPlaneNodeSet become SetupReady _raw_params=oc wait OpenStackDataPlaneNodeSet {{ cr_name }} --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for=condition=SetupReady --timeout={{ cifmw_edpm_deploy_timeout }}m] *** 2025-12-13 07:05:00,402 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:05:00 +0000 (0:00:00.681) 0:11:56.551 ***** 2025-12-13 07:05:00,402 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:05:00 +0000 (0:00:00.681) 0:11:56.550 ***** 2025-12-13 07:05:01,105 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:05:01,113 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Apply DataPlaneDeployment resource kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit) }}, context={{ cifmw_openshift_context | default(omit) }}, state=present, definition={{ lookup('file', cifmw_edpm_deploy_crs_kustomize_result.output_path) | from_yaml_all | selectattr('kind', 'search', cifmw_edpm_deploy_step2_kind) }}] *** 2025-12-13 07:05:01,113 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:05:01 +0000 (0:00:00.711) 0:11:57.263 ***** 2025-12-13 07:05:01,113 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:05:01 +0000 (0:00:00.711) 0:11:57.261 ***** 2025-12-13 07:05:01,746 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:05:01,753 p=31853 u=zuul 
n=ansible | TASK [edpm_deploy : Wait for OpenStackDataPlaneDeployment become Ready _raw_params=oc wait OpenStackDataPlaneDeployment {{ cr_name }} --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for=condition=Ready --timeout={{ cifmw_edpm_deploy_timeout }}m] *** 2025-12-13 07:05:01,754 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:05:01 +0000 (0:00:00.640) 0:11:57.903 ***** 2025-12-13 07:05:01,754 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:05:01 +0000 (0:00:00.640) 0:11:57.902 ***** 2025-12-13 07:12:26,410 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:12:26,417 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Run nova-manage discover_hosts to ensure compute nodes are mapped output_dir={{ cifmw_basedir }}/artifacts, executable=/bin/bash, script=set -xe oc rsh --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose ] *** 2025-12-13 07:12:26,417 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:07:24.663) 0:19:22.567 ***** 2025-12-13 07:12:26,417 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:07:24.663) 0:19:22.565 ***** 2025-12-13 07:12:26,442 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:12:26,451 p=31853 u=zuul n=ansible | TASK [Validate EDPM name=install_yamls_makes, tasks_from=make_edpm_deploy_instance] *** 2025-12-13 07:12:26,451 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.033) 0:19:22.600 ***** 2025-12-13 07:12:26,451 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.033) 0:19:22.599 ***** 2025-12-13 07:12:26,474 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:12:26,513 p=31853 u=zuul n=ansible | PLAY [Deploy NFS server on target nodes] *************************************** 2025-12-13 07:12:26,530 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] *** 2025-12-13 07:12:26,530 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.079) 0:19:22.680 ***** 2025-12-13 07:12:26,530 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.079) 0:19:22.679 ***** 2025-12-13 07:12:26,546 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,553 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Install required packages name=['nfs-utils', 'iptables']] **** 2025-12-13 07:12:26,553 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.702 ***** 2025-12-13 07:12:26,553 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.701 ***** 2025-12-13 07:12:26,568 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,575 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Configure nfs to use v4 only path=/etc/nfs.conf, section=nfsd, option=vers3, value=n, backup=True, mode=0644] *** 2025-12-13 07:12:26,575 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.021) 0:19:22.725 ***** 2025-12-13 07:12:26,575 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.723 ***** 2025-12-13 07:12:26,590 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,596 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Disable NFSv3-related services name={{ item }}, masked=True] *** 2025-12-13 
07:12:26,597 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.021) 0:19:22.746 ***** 2025-12-13 07:12:26,597 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.021) 0:19:22.745 ***** 2025-12-13 07:12:26,616 p=31853 u=zuul n=ansible | skipping: [compute-0] => (item=rpc-statd.service) 2025-12-13 07:12:26,620 p=31853 u=zuul n=ansible | skipping: [compute-0] => (item=rpcbind.service) 2025-12-13 07:12:26,623 p=31853 u=zuul n=ansible | skipping: [compute-0] => (item=rpcbind.socket) 2025-12-13 07:12:26,624 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,631 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Ensure shared folder exist path=/data/{{ item }}, state=directory, mode=755] *** 2025-12-13 07:12:26,631 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.034) 0:19:22.780 ***** 2025-12-13 07:12:26,631 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.034) 0:19:22.779 ***** 2025-12-13 07:12:26,647 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,653 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Set nfs network vars _raw_params=oc get ipset {{ _nfs_host }} -n {{ _ipset_namespace }} -o jsonpath='{.status.reservations[?(@.network=="{{ _nfs_network_name }}")]}'] *** 2025-12-13 07:12:26,653 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.803 ***** 2025-12-13 07:12:26,654 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.802 ***** 2025-12-13 07:12:26,671 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,678 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Store nfs network vars dest={{ cifmw_basedir }}/artifacts/parameters/nfs-params.yml, content={{ { 'cifmw_nfs_ip': cifmw_nfs_network_out.stdout | from_json | json_query('address'), 'cifmw_nfs_network_range': cifmw_nfs_network_out.stdout | from_json | json_query('cidr') } | to_nice_yaml }}, mode=0644] *** 2025-12-13 07:12:26,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.024) 0:19:22.828 ***** 2025-12-13 07:12:26,678 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.024) 0:19:22.826 ***** 2025-12-13 07:12:26,696 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,702 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Generate nftables rules file content=add rule inet filter EDPM_INPUT tcp dport 2049 accept , dest={{ nftables_path }}/nfs-server.nft, mode=0666] *** 2025-12-13 07:12:26,703 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.024) 0:19:22.852 ***** 2025-12-13 07:12:26,703 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.024) 0:19:22.851 ***** 2025-12-13 07:12:26,717 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,725 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Update nftables.conf and include nfs rules at the bottom path={{ nftables_conf }}, line=include "{{ nftables_path }}/nfs-server.nft", insertafter=EOF] *** 2025-12-13 07:12:26,725 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.875 ***** 2025-12-13 07:12:26,726 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.874 ***** 2025-12-13 07:12:26,743 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,750 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Restart nftables service name=nftables, state=restarted] ***** 
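The cifmw_nfs tasks above (all skipped on compute-0 in this run) outline an NFSv4-only server opened through nftables. A minimal sketch of the nfs.conf and firewall pieces, assuming nftables_path and nftables_conf point at the host's nftables drop-in directory and main config; the option name, rule text and include line are taken from the task parameters above:

- name: Configure nfs to use v4 only
  community.general.ini_file:
    path: /etc/nfs.conf
    section: nfsd
    option: vers3
    value: "n"
    backup: true
    mode: "0644"

- name: Generate nftables rules file
  ansible.builtin.copy:
    content: |
      add rule inet filter EDPM_INPUT tcp dport 2049 accept
    dest: "{{ nftables_path }}/nfs-server.nft"
    mode: "0666"

- name: Update nftables.conf and include nfs rules at the bottom
  ansible.builtin.lineinfile:
    path: "{{ nftables_conf }}"
    line: 'include "{{ nftables_path }}/nfs-server.nft"'
    insertafter: EOF

- name: Restart nftables service
  ansible.builtin.service:
    name: nftables
    state: restarted

Running the server v4-only fits the earlier "Disable NFSv3-related services" task, which masks rpc-statd, rpcbind.service and rpcbind.socket so only TCP 2049 needs to be opened.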
2025-12-13 07:12:26,750 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.024) 0:19:22.900 ***** 2025-12-13 07:12:26,750 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.024) 0:19:22.898 ***** 2025-12-13 07:12:26,765 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,772 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Configure the ip the nfs server should listen on path=/etc/nfs.conf, section=nfsd, option=host, value={{ cifmw_nfs_network_out.stdout | from_json | json_query('address') }}, backup=True, mode=0644] *** 2025-12-13 07:12:26,772 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.922 ***** 2025-12-13 07:12:26,772 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.022) 0:19:22.921 ***** 2025-12-13 07:12:26,787 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,794 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Enable and restart nfs-server service name=nfs-server, state=restarted, enabled=True] *** 2025-12-13 07:12:26,794 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.021) 0:19:22.944 ***** 2025-12-13 07:12:26,794 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.021) 0:19:22.942 ***** 2025-12-13 07:12:26,809 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,815 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Add shares to /etc/exports path=/etc/exports, line=/data/{{ item }} {{ cifmw_nfs_network_out.stdout | from_json | json_query('cidr') }}(rw,sync,no_root_squash)] *** 2025-12-13 07:12:26,815 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.021) 0:19:22.965 ***** 2025-12-13 07:12:26,816 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.021) 0:19:22.964 ***** 2025-12-13 07:12:26,831 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,839 p=31853 u=zuul n=ansible | TASK [cifmw_nfs : Export the shares _raw_params=exportfs -a] ******************* 2025-12-13 07:12:26,839 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.023) 0:19:22.989 ***** 2025-12-13 07:12:26,839 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.023) 0:19:22.988 ***** 2025-12-13 07:12:26,916 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,943 p=31853 u=zuul n=ansible | PLAY [Clear ceph target hosts facts to force refreshing in HCI deployments] **** 2025-12-13 07:12:26,958 p=31853 u=zuul n=ansible | TASK [Early end if architecture deploy _raw_params=end_play] ******************* 2025-12-13 07:12:26,958 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.118) 0:19:23.108 ***** 2025-12-13 07:12:26,958 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.118) 0:19:23.107 ***** 2025-12-13 07:12:26,967 p=31853 u=zuul n=ansible | skipping: [compute-0] 2025-12-13 07:12:26,971 p=31853 u=zuul n=ansible | TASK [Clear ceph target hosts facts _raw_params=clear_facts] ******************* 2025-12-13 07:12:26,971 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.013) 0:19:23.121 ***** 2025-12-13 07:12:26,971 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:26 +0000 (0:00:00.013) 0:19:23.120 ***** 2025-12-13 07:12:26,989 p=31853 u=zuul n=ansible | PLAY [Deploy ceph using hooks] ************************************************* 2025-12-13 07:12:27,004 p=31853 u=zuul n=ansible | TASK 
[run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 07:12:27,005 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.033) 0:19:23.154 ***** 2025-12-13 07:12:27,005 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.033) 0:19:23.153 ***** 2025-12-13 07:12:27,048 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,054 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 07:12:27,055 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.049) 0:19:23.204 ***** 2025-12-13 07:12:27,055 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.049) 0:19:23.203 ***** 2025-12-13 07:12:27,123 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,130 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_ceph _raw_params={{ hook.type }}.yml] *** 2025-12-13 07:12:27,130 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.075) 0:19:23.280 ***** 2025-12-13 07:12:27,130 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.075) 0:19:23.279 ***** 2025-12-13 07:12:27,217 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '80 Run Ceph hook playbook', 'type': 'playbook', 'source': 'ceph.yml'}) 2025-12-13 07:12:27,227 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for 80 Run Ceph hook playbook cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 07:12:27,227 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.096) 0:19:23.376 ***** 2025-12-13 07:12:27,227 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.096) 0:19:23.375 ***** 2025-12-13 07:12:27,262 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,269 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 07:12:27,269 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.042) 0:19:23.419 ***** 2025-12-13 07:12:27,269 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.042) 0:19:23.418 ***** 2025-12-13 07:12:27,434 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,442 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
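The run_hook tasks above show how hooks are driven: each hook is a mapping with a name, a type and a source, and "Loop on hooks" includes a task file named after the hook type ({{ hook.type }}.yml, which resolves to playbook.yml for the Ceph hook). A minimal sketch of that dispatch pattern, assuming a hypothetical my_hooks variable and a playbook.yml task file sitting next to the play; the real ci-framework variable names and paths may differ.

- hosts: localhost
  gather_facts: false
  vars:
    my_hooks:                     # hypothetical name; each entry is shaped like the item logged above
      - name: 80 Run Ceph hook playbook
        type: playbook
        source: ceph.yml
  tasks:
    - name: Loop on hooks, dispatching each one to a task file named after its type
      ansible.builtin.include_tasks: "{{ hook.type }}.yml"   # resolves to playbook.yml here
      loop: "{{ my_hooks }}"
      loop_control:
        loop_var: hook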
*** 2025-12-13 07:12:27,442 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.172) 0:19:23.591 ***** 2025-12-13 07:12:27,442 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.172) 0:19:23.590 ***** 2025-12-13 07:12:27,454 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:12:27,462 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 07:12:27,462 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.020) 0:19:23.611 ***** 2025-12-13 07:12:27,462 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.020) 0:19:23.610 ***** 2025-12-13 07:12:27,617 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,624 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 07:12:27,624 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.161) 0:19:23.773 ***** 2025-12-13 07:12:27,624 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.161) 0:19:23.772 ***** 2025-12-13 07:12:27,640 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,647 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 07:12:27,647 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.023) 0:19:23.796 ***** 2025-12-13 07:12:27,647 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.023) 0:19:23.795 ***** 2025-12-13 07:12:27,801 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,809 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 07:12:27,809 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.162) 0:19:23.959 ***** 2025-12-13 07:12:27,809 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.162) 0:19:23.957 ***** 2025-12-13 07:12:27,968 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:12:27,976 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 80 Run Ceph hook playbook] *********** 2025-12-13 07:12:27,977 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.167) 0:19:24.126 ***** 2025-12-13 07:12:27,977 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:12:27 +0000 (0:00:00.167) 0:19:24.125 ***** 2025-12-13 07:12:28,019 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_013_run_hook_without_retry_80_run.log 2025-12-13 07:15:45,148 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:45,155 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 80 Run Ceph hook playbook] ************** 2025-12-13 07:15:45,155 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:03:17.178) 0:22:41.304 ***** 2025-12-13 07:15:45,155 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:03:17.178) 0:22:41.303 ***** 2025-12-13 07:15:45,170 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:15:45,178 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ 
hook_name }}.yml] *** 2025-12-13 07:15:45,178 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.023) 0:22:41.327 ***** 2025-12-13 07:15:45,178 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.023) 0:22:41.326 ***** 2025-12-13 07:15:45,323 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:15:45,330 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 07:15:45,330 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.152) 0:22:41.480 ***** 2025-12-13 07:15:45,330 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.152) 0:22:41.479 ***** 2025-12-13 07:15:45,344 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:15:45,373 p=31853 u=zuul n=ansible | PLAY [Continue HCI deploy, deploy architecture and validate workflow] ********** 2025-12-13 07:15:45,400 p=31853 u=zuul n=ansible | TASK [Prepare for HCI deploy phase 2 name=hci_prepare, tasks_from=phase2.yml] *** 2025-12-13 07:15:45,400 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.069) 0:22:41.550 ***** 2025-12-13 07:15:45,400 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.069) 0:22:41.548 ***** 2025-12-13 07:15:45,443 p=31853 u=zuul n=ansible | TASK [hci_prepare : Set common facts _cifmw_hci_prepare_namespace={{ cifmw_install_yamls_defaults.NAMESPACE | default(cifmw_hci_prepare_namespace) }}] *** 2025-12-13 07:15:45,444 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.043) 0:22:41.593 ***** 2025-12-13 07:15:45,444 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.043) 0:22:41.592 ***** 2025-12-13 07:15:45,465 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:15:45,472 p=31853 u=zuul n=ansible | TASK [hci_prepare : Ensure directories path={{ item }}, state=directory, mode=0755] *** 2025-12-13 07:15:45,472 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.028) 0:22:41.622 ***** 2025-12-13 07:15:45,472 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.028) 0:22:41.621 ***** 2025-12-13 07:15:45,644 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts) 2025-12-13 07:15:45,796 p=31853 u=zuul n=ansible | ok: [localhost] => (item=/home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane) 2025-12-13 07:15:45,804 p=31853 u=zuul n=ansible | TASK [hci_prepare : Create ceph config secret output_dir={{ cifmw_hci_prepare_basedir }}/artifacts, script=oc apply -f {{ cifmw_hci_prepare_ceph_secret_path }}] *** 2025-12-13 07:15:45,804 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.331) 0:22:41.953 ***** 2025-12-13 07:15:45,804 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:45 +0000 (0:00:00.331) 0:22:41.952 ***** 2025-12-13 07:15:45,845 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_016_create_ceph_config.log 2025-12-13 07:15:46,009 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:46,016 p=31853 u=zuul n=ansible | TASK [hci_prepare : Set Ceph FSID fact cifmw_hci_prepare_ceph_fsid={{ (lookup('template', cifmw_hci_prepare_ceph_secret_path)|from_yaml).data['ceph.conf'] | b64decode | regex_search('fsid = (.*)', '\1') | first | trim }}] *** 2025-12-13 07:15:46,016 p=31853 u=zuul n=ansible | Saturday 13 
December 2025 07:15:46 +0000 (0:00:00.212) 0:22:42.166 ***** 2025-12-13 07:15:46,016 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:46 +0000 (0:00:00.212) 0:22:42.165 ***** 2025-12-13 07:15:46,090 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:15:46,097 p=31853 u=zuul n=ansible | TASK [hci_prepare : Generate nova config map src=templates/configmap-ceph-nova.yml.j2, dest={{ cifmw_hci_prepare_basedir }}/artifacts/configmap-ceph-nova.yml, mode=0644] *** 2025-12-13 07:15:46,097 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:46 +0000 (0:00:00.080) 0:22:42.246 ***** 2025-12-13 07:15:46,097 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:46 +0000 (0:00:00.080) 0:22:42.245 ***** 2025-12-13 07:15:46,431 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:46,438 p=31853 u=zuul n=ansible | TASK [hci_prepare : Create nova config map output_dir={{ cifmw_hci_prepare_basedir }}/artifacts, script=oc apply -f {{ cifmw_hci_prepare_basedir }}/artifacts/configmap-ceph-nova.yml] *** 2025-12-13 07:15:46,438 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:46 +0000 (0:00:00.341) 0:22:42.588 ***** 2025-12-13 07:15:46,438 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:46 +0000 (0:00:00.341) 0:22:42.587 ***** 2025-12-13 07:15:46,478 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_017_create_nova_config.log 2025-12-13 07:15:46,647 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:46,655 p=31853 u=zuul n=ansible | TASK [hci_prepare : Generate Ceph-Nova Dataplane Service src=templates/dpservice-nova-custom-ceph.yml.j2, dest={{ cifmw_hci_prepare_basedir }}/artifacts/dpservice-nova-custom-ceph.yml, mode=0644] *** 2025-12-13 07:15:46,655 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:46 +0000 (0:00:00.216) 0:22:42.804 ***** 2025-12-13 07:15:46,655 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:46 +0000 (0:00:00.216) 0:22:42.803 ***** 2025-12-13 07:15:46,999 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:47,007 p=31853 u=zuul n=ansible | TASK [hci_prepare : Create Ceph-Nova Dataplane Service output_dir={{ cifmw_hci_prepare_basedir }}/artifacts, script=oc apply -f {{ cifmw_hci_prepare_basedir }}/artifacts/dpservice-nova-custom-ceph.yml] *** 2025-12-13 07:15:47,007 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.352) 0:22:43.157 ***** 2025-12-13 07:15:47,007 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.352) 0:22:43.156 ***** 2025-12-13 07:15:47,050 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_018_create_ceph_nova_dataplane.log 2025-12-13 07:15:47,224 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:47,231 p=31853 u=zuul n=ansible | TASK [hci_prepare : Delete OpenStackDataPlaneDeployment output_dir={{ cifmw_hci_prepare_basedir }}/artifacts, script=oc delete OpenStackDataPlaneDeployment --all -n {{ _cifmw_hci_prepare_namespace }}] *** 2025-12-13 07:15:47,231 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.223) 0:22:43.381 ***** 2025-12-13 07:15:47,231 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.223) 0:22:43.379 ***** 2025-12-13 07:15:47,271 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_019_delete.log 2025-12-13 07:15:47,450 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:47,458 
p=31853 u=zuul n=ansible | TASK [hci_prepare : Create configuration to finish HCI deployment mode=0644, dest={{ cifmw_hci_prepare_dataplane_dir }}/87-hci-post-kustomization.yaml, content=apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: namespace: {{ _cifmw_hci_prepare_namespace }} patches: - target: kind: OpenStackDataPlaneNodeSet patch: |- - op: add path: /spec/nodeTemplate/extraMounts value: - extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: "/etc/ceph" readOnly: true - op: replace path: /spec/services value: {% if cifmw_hci_prepare_enable_repo_setup_service|bool %} - repo-setup {% endif %} - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - ceph-client - ovn - neutron-metadata - libvirt - nova-custom-ceph {% if cifmw_hci_prepare_extra_services | length > 0 %} {% for svc in cifmw_hci_prepare_extra_services %} - {{ svc }} {% endfor %} {% endif %}] *** 2025-12-13 07:15:47,458 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.226) 0:22:43.607 ***** 2025-12-13 07:15:47,458 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.226) 0:22:43.606 ***** 2025-12-13 07:15:47,789 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:47,797 p=31853 u=zuul n=ansible | TASK [hci_prepare : Enabled nova discover_hosts after deployment cifmw_edpm_deploy_skip_nova_discover_hosts=False] *** 2025-12-13 07:15:47,797 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.338) 0:22:43.946 ***** 2025-12-13 07:15:47,797 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.338) 0:22:43.945 ***** 2025-12-13 07:15:47,817 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:15:47,824 p=31853 u=zuul n=ansible | TASK [hci_prepare : Save HCI info mode=0644, dest={{ cifmw_hci_prepare_basedir }}/artifacts/parameters/hci_prepare_phase2_params.yml, content={{ file_content | to_nice_yaml }}] *** 2025-12-13 07:15:47,824 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.027) 0:22:43.974 ***** 2025-12-13 07:15:47,824 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:47 +0000 (0:00:00.027) 0:22:43.973 ***** 2025-12-13 07:15:48,155 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:48,167 p=31853 u=zuul n=ansible | TASK [Continue HCI deployment name=edpm_deploy] ******************************** 2025-12-13 07:15:48,167 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.342) 0:22:44.316 ***** 2025-12-13 07:15:48,167 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.342) 0:22:44.315 ***** 2025-12-13 07:15:48,221 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Set EDPM related vars cifmw_edpm_deploy_env={{ cifmw_install_yamls_environment | combine({'PATH': cifmw_path}) | combine({'DATAPLANE_REGISTRY_URL': cifmw_edpm_deploy_registry_url }) | combine({'DATAPLANE_CONTAINER_TAG': cifmw_repo_setup_full_hash | default(cifmw_install_yamls_defaults['DATAPLANE_CONTAINER_TAG']) }) | combine(cifmw_edpm_deploy_extra_vars | default({})) | combine(_install_yamls_repos | default({})) }}, cacheable=True] *** 2025-12-13 07:15:48,221 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.054) 0:22:44.371 ***** 2025-12-13 07:15:48,221 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.054) 0:22:44.369 ***** 
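The "Set EDPM related vars" task above builds a single environment dictionary by chaining combine filters, so each later source overrides the earlier ones on key conflicts. A small sketch of that layering, assuming stand-in dictionaries (base_env, overrides) in place of the install_yamls and repo_setup variables; the registry URL and container tag values are taken from this log purely as examples.

- hosts: localhost
  gather_facts: false
  vars:
    base_env:                      # stand-in for cifmw_install_yamls_environment
      NAMESPACE: openstack
      DATAPLANE_CONTAINER_TAG: current-podified
    overrides:                     # stand-in for cifmw_edpm_deploy_extra_vars
      DATAPLANE_TIMEOUT: "40"
  tasks:
    - name: Layer the environment; the right-most combine wins on key conflicts
      ansible.builtin.set_fact:
        deploy_env: >-
          {{ base_env
             | combine({'DATAPLANE_REGISTRY_URL': 'quay.io/podified-antelope-centos9'})
             | combine({'DATAPLANE_CONTAINER_TAG': 'c3923531bcda0b0811b2d5053f189beb'})
             | combine(overrides | default({})) }}
        cacheable: true

    - name: Show the merged result
      ansible.builtin.debug:
        var: deploy_env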
2025-12-13 07:15:48,250 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:15:48,256 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Create the config file mode=0644, content={{ cifmw_edpm_deploy_nova_compute_extra_config }}, dest={{ _cifmw_edpm_deploy_nova_extra_config_file }}] *** 2025-12-13 07:15:48,257 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.035) 0:22:44.406 ***** 2025-12-13 07:15:48,257 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.035) 0:22:44.405 ***** 2025-12-13 07:15:48,592 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:15:48,598 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Define DATAPLANE_EXTRA_NOVA_CONFIG_FILE cifmw_edpm_deploy_env={{ cifmw_edpm_deploy_env | default({}) | combine({'DATAPLANE_EXTRA_NOVA_CONFIG_FILE': _cifmw_edpm_deploy_nova_extra_config_file }) }}, cacheable=True] *** 2025-12-13 07:15:48,599 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.342) 0:22:44.748 ***** 2025-12-13 07:15:48,599 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.342) 0:22:44.747 ***** 2025-12-13 07:15:48,626 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:15:48,633 p=31853 u=zuul n=ansible | TASK [Prepare OpenStack Dataplane NodeSet CR name=install_yamls_makes, tasks_from=make_edpm_deploy_prep] *** 2025-12-13 07:15:48,633 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.034) 0:22:44.783 ***** 2025-12-13 07:15:48,633 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.034) 0:22:44.781 ***** 2025-12-13 07:15:48,651 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:15:48,658 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Perform kustomizations to the OpenStackDataPlaneNodeSet CR target_path={{ cifmw_edpm_deploy_openstack_crs_path }}, sort_ascending=False, kustomizations_paths={{ [ ( [ cifmw_edpm_deploy_manifests_dir, 'kustomizations', 'dataplane' ] | ansible.builtin.path_join ) ] }}] *** 2025-12-13 07:15:48,658 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.024) 0:22:44.808 ***** 2025-12-13 07:15:48,658 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:48 +0000 (0:00:00.024) 0:22:44.806 ***** 2025-12-13 07:15:49,302 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:49,310 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Log the CR that is about to be applied var=cifmw_edpm_deploy_crs_kustomize_result] *** 2025-12-13 07:15:49,310 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:49 +0000 (0:00:00.652) 0:22:45.460 ***** 2025-12-13 07:15:49,310 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:49 +0000 (0:00:00.652) 0:22:45.458 ***** 2025-12-13 07:15:49,340 p=31853 u=zuul n=ansible | ok: [localhost] => cifmw_edpm_deploy_crs_kustomize_result: changed: true count: 5 failed: false kustomizations_paths: - /home/zuul/ci-framework-data/artifacts/manifests/openstack/dataplane/cr/kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane/99-kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane/89-storage-mgmt-kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane/88-hci-pre-kustomization.yaml - /home/zuul/ci-framework-data/artifacts/manifests/kustomizations/dataplane/87-hci-post-kustomization.yaml output_path: 
/home/zuul/ci-framework-data/artifacts/manifests/openstack/dataplane/cr/cifmw-kustomization-result.yaml result: - apiVersion: v1 data: network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} kind: ConfigMap metadata: labels: created-by: install_yamls name: network-config-template-ipam namespace: openstack - apiVersion: v1 data: physical_bridge_name: br-ex public_interface_name: eth0 kind: ConfigMap metadata: labels: created-by: install_yamls name: neutron-edpm-ipam namespace: openstack - apiVersion: v1 data: 25-nova-extra.conf: '' kind: ConfigMap metadata: labels: created-by: install_yamls name: nova-extra-config namespace: openstack - apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: labels: created-by: install_yamls name: edpm-deployment namespace: openstack spec: nodeSets: - openstack-edpm-ipam - apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: labels: created-by: install_yamls name: openstack-edpm-ipam namespace: openstack spec: env: - name: ANSIBLE_VERBOSITY value: '2' networkAttachments: - ctlplane nodeTemplate: ansible: ansibleUser: zuul ansibleVars: ctlplane_dns_nameservers: - 192.168.122.10 - 1.1.1.1 edpm_container_registry_insecure_registries: - 38.129.56.153:5001 edpm_network_config_debug: true edpm_network_config_template: |- --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: true mtu: {{ min_viable_mtu }} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% if edpm_network_config_nmstate | bool %} # this ovs_extra configuration fixes OSPRH-17551, but it will be not needed when FDP-1472 is resolved ovs_extra: - "set interface eth1 external-ids:ovn-egress-iface=true" {% endif %} {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ 
lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false edpm_os_net_config_mappings: net_config_data_lookup: edpm-compute: nic2: eth1 edpm_sshd_allowed_ranges: - 0.0.0.0/0 enable_debug: false gather_facts: false image_prefix: openstack image_tag: c3923531bcda0b0811b2d5053f189beb neutron_public_interface_name: eth1 registry_url: quay.io/podified-antelope-centos9 timesync_ntp_servers: - hostname: pool.ntp.org ansibleVarsFrom: - configMapRef: name: network-config-template-ipam prefix: edpm_ - configMapRef: name: neutron-edpm-ipam prefix: neutron_ ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret extraMounts: - extraVolType: Ceph mounts: - mountPath: /etc/ceph name: ceph readOnly: true volumes: - name: ceph secret: secretName: ceph-conf-files nodes: edpm-compute-0: ansible: ansibleHost: 192.168.122.100 hostName: compute-0 networks: - defaultRoute: false fixedIP: 192.168.122.100 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 - name: StorageMgmt subnetName: subnet1 preProvisioned: true services: - repo-setup - bootstrap - configure-network - validate-network - install-os - ceph-hci-pre - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - ceph-client - ovn - neutron-metadata - libvirt - nova-custom-ceph tlsEnabled: true 2025-12-13 07:15:49,347 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Apply dataplane resources but ignore DataPlaneDeployment kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit) }}, context={{ cifmw_openshift_context | default(omit) }}, state=present, definition={{ lookup('file', cifmw_edpm_deploy_crs_kustomize_result.output_path) | from_yaml_all | rejectattr('kind', 'search', cifmw_edpm_deploy_step2_kind) }}] *** 2025-12-13 07:15:49,347 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:49 +0000 (0:00:00.037) 0:22:45.497 ***** 2025-12-13 07:15:49,347 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:49 +0000 (0:00:00.037) 0:22:45.495 ***** 2025-12-13 07:15:50,010 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:50,019 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Wait for OpenStackDataPlaneNodeSet become SetupReady _raw_params=oc wait OpenStackDataPlaneNodeSet {{ cr_name }} --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for=condition=SetupReady --timeout={{ cifmw_edpm_deploy_timeout }}m] *** 2025-12-13 07:15:50,019 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:50 +0000 (0:00:00.671) 0:22:46.168 ***** 2025-12-13 07:15:50,019 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:50 +0000 (0:00:00.671) 0:22:46.167 ***** 2025-12-13 07:15:50,393 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:50,401 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Apply DataPlaneDeployment resource kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit) }}, context={{ cifmw_openshift_context | default(omit) }}, state=present, definition={{ lookup('file', cifmw_edpm_deploy_crs_kustomize_result.output_path) | from_yaml_all | selectattr('kind', 'search', cifmw_edpm_deploy_step2_kind) }}] *** 2025-12-13 07:15:50,401 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:50 +0000 (0:00:00.382) 0:22:46.551 ***** 2025-12-13 
07:15:50,401 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:50 +0000 (0:00:00.382) 0:22:46.550 ***** 2025-12-13 07:15:51,023 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:15:51,030 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Wait for OpenStackDataPlaneDeployment become Ready _raw_params=oc wait OpenStackDataPlaneDeployment {{ cr_name }} --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} --for=condition=Ready --timeout={{ cifmw_edpm_deploy_timeout }}m] *** 2025-12-13 07:15:51,030 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:51 +0000 (0:00:00.628) 0:22:47.179 ***** 2025-12-13 07:15:51,030 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:15:51 +0000 (0:00:00.628) 0:22:47.178 ***** 2025-12-13 07:29:21,108 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:29:21,114 p=31853 u=zuul n=ansible | TASK [edpm_deploy : Run nova-manage discover_hosts to ensure compute nodes are mapped output_dir={{ cifmw_basedir }}/artifacts, executable=/bin/bash, script=set -xe oc rsh --namespace={{ cifmw_install_yamls_defaults['NAMESPACE'] }} nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose ] *** 2025-12-13 07:29:21,114 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:21 +0000 (0:13:30.084) 0:36:17.264 ***** 2025-12-13 07:29:21,115 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:21 +0000 (0:13:30.084) 0:36:17.263 ***** 2025-12-13 07:29:21,162 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_020_run_nova_manage_discover.log 2025-12-13 07:29:23,328 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:29:23,336 p=31853 u=zuul n=ansible | TASK [Validate EDPM name=install_yamls_makes, tasks_from=make_edpm_deploy_instance] *** 2025-12-13 07:29:23,336 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:02.221) 0:36:19.486 ***** 2025-12-13 07:29:23,336 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:02.221) 0:36:19.484 ***** 2025-12-13 07:29:23,355 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:29:23,368 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 07:29:23,368 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.032) 0:36:19.518 ***** 2025-12-13 07:29:23,368 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.032) 0:36:19.516 ***** 2025-12-13 07:29:23,417 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:23,424 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
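The long "Wait for OpenStackDataPlaneDeployment become Ready" step and the nova-manage call above are plain oc invocations wrapped in tasks. A condensed sketch of those two gates, assuming the openstack namespace, the edpm-deployment CR name and the nova-cell0-conductor-0 pod shown in this log; the 40m timeout is an assumption standing in for cifmw_edpm_deploy_timeout.

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Block until the dataplane deployment reports Ready
      ansible.builtin.command: >-
        oc wait OpenStackDataPlaneDeployment edpm-deployment
        --namespace=openstack
        --for=condition=Ready
        --timeout=40m
      changed_when: false

    - name: Map the freshly deployed compute hosts into the Nova cell database
      ansible.builtin.command: >-
        oc rsh --namespace=openstack nova-cell0-conductor-0
        nova-manage cell_v2 discover_hosts --verbose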
*** 2025-12-13 07:29:23,424 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.055) 0:36:19.574 ***** 2025-12-13 07:29:23,424 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.055) 0:36:19.572 ***** 2025-12-13 07:29:23,496 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:23,503 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_deploy _raw_params={{ hook.type }}.yml] *** 2025-12-13 07:29:23,503 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.079) 0:36:19.653 ***** 2025-12-13 07:29:23,503 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.079) 0:36:19.652 ***** 2025-12-13 07:29:23,600 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '81 Kustomize OpenStack CR with Ceph', 'type': 'playbook', 'source': 'control_plane_ceph_backends.yml'}) 2025-12-13 07:29:23,608 p=31853 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '82 Kustomize and update Control Plane', 'type': 'playbook', 'source': 'control_plane_kustomize_deploy.yml'}) 2025-12-13 07:29:23,618 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for 81 Kustomize OpenStack CR with Ceph cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 07:29:23,618 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.115) 0:36:19.768 ***** 2025-12-13 07:29:23,619 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.115) 0:36:19.767 ***** 2025-12-13 07:29:23,657 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:23,665 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 07:29:23,665 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.046) 0:36:19.814 ***** 2025-12-13 07:29:23,665 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.046) 0:36:19.813 ***** 2025-12-13 07:29:23,831 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:23,839 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
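The extra_vars string built in "Set playbook path" above is a Jinja loop that turns a hook's extra_vars mapping into -e flags, with the special key file expanding to -e "@<path>". A sketch of that rendering with a fully hypothetical hook entry (the hooks logged here carry no extra_vars; the keys below are only illustrations).

- hosts: localhost
  gather_facts: false
  vars:
    hook:                                   # hypothetical hook entry for illustration
      name: 99 Example hook
      type: playbook
      source: example.yml
      extra_vars:
        file: /tmp/params.yml               # 'file' becomes -e "@/tmp/params.yml"
        cifmw_example_flag: "true"          # any other key becomes -e "key=value"
  tasks:
    - name: Render the -e flag string appended to the ansible-playbook call
      ansible.builtin.debug:
        msg: >-
          {% for key, value in hook.extra_vars.items() %}
          {%- if key == 'file' %} -e "@{{ value }}"
          {%- else %} -e "{{ key }}={{ value }}"
          {%- endif %}
          {%- endfor %}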
*** 2025-12-13 07:29:23,839 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.174) 0:36:19.989 ***** 2025-12-13 07:29:23,839 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.174) 0:36:19.988 ***** 2025-12-13 07:29:23,856 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:29:23,863 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 07:29:23,863 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.023) 0:36:20.013 ***** 2025-12-13 07:29:23,863 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:23 +0000 (0:00:00.023) 0:36:20.011 ***** 2025-12-13 07:29:24,022 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:24,030 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 07:29:24,030 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.166) 0:36:20.180 ***** 2025-12-13 07:29:24,030 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.166) 0:36:20.178 ***** 2025-12-13 07:29:24,050 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:24,058 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 07:29:24,058 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.027) 0:36:20.208 ***** 2025-12-13 07:29:24,058 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.027) 0:36:20.206 ***** 2025-12-13 07:29:24,216 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:24,224 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 07:29:24,224 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.165) 0:36:20.373 ***** 2025-12-13 07:29:24,224 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.165) 0:36:20.372 ***** 2025-12-13 07:29:24,396 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:24,405 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 81 Kustomize OpenStack CR with Ceph] *** 2025-12-13 07:29:24,405 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.181) 0:36:20.554 ***** 2025-12-13 07:29:24,405 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:24 +0000 (0:00:00.181) 0:36:20.553 ***** 2025-12-13 07:29:24,452 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_021_run_hook_without_retry_81.log 2025-12-13 07:29:25,938 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:29:25,946 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 81 Kustomize OpenStack CR with Ceph] **** 2025-12-13 07:29:25,946 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:25 +0000 (0:00:01.541) 0:36:22.096 ***** 2025-12-13 07:29:25,946 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:25 +0000 (0:00:01.541) 0:36:22.094 ***** 2025-12-13 07:29:25,965 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:29:25,973 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ 
hook_name }}.yml] *** 2025-12-13 07:29:25,973 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:25 +0000 (0:00:00.027) 0:36:22.123 ***** 2025-12-13 07:29:25,973 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:25 +0000 (0:00:00.027) 0:36:22.122 ***** 2025-12-13 07:29:26,122 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:26,130 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 07:29:26,130 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.156) 0:36:22.279 ***** 2025-12-13 07:29:26,130 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.156) 0:36:22.278 ***** 2025-12-13 07:29:26,146 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:29:26,155 p=31853 u=zuul n=ansible | TASK [run_hook : Set playbook path for 82 Kustomize and update Control Plane cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 07:29:26,156 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.025) 0:36:22.305 ***** 2025-12-13 07:29:26,156 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.025) 0:36:22.304 ***** 2025-12-13 07:29:26,195 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:26,203 p=31853 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 07:29:26,203 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.047) 0:36:22.353 ***** 2025-12-13 07:29:26,203 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.047) 0:36:22.351 ***** 2025-12-13 07:29:26,361 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:26,369 p=31853 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] 
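The "Get file stat" / "Fail if playbook doesn't exist" pair above is a guard that aborts with a clear message before anything is executed. A generic sketch of that guard; the playbook_path value and the register name are illustrative, not the role's own.

- hosts: localhost
  gather_facts: false
  vars:
    playbook_path: /path/to/control_plane_kustomize_deploy.yml   # placeholder path
  tasks:
    - name: Get file stat
      ansible.builtin.stat:
        path: "{{ playbook_path }}"
      register: _playbook_stat        # illustrative register name

    - name: Fail early with a clear message when the hook playbook is missing
      ansible.builtin.fail:
        msg: "Playbook {{ playbook_path }} doesn't seem to exist."
      when: not _playbook_stat.stat.exists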
*** 2025-12-13 07:29:26,369 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.166) 0:36:22.519 ***** 2025-12-13 07:29:26,369 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.166) 0:36:22.518 ***** 2025-12-13 07:29:26,384 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:29:26,391 p=31853 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 07:29:26,391 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.022) 0:36:22.541 ***** 2025-12-13 07:29:26,391 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.022) 0:36:22.540 ***** 2025-12-13 07:29:26,545 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:26,552 p=31853 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 07:29:26,552 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.160) 0:36:22.702 ***** 2025-12-13 07:29:26,552 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.160) 0:36:22.700 ***** 2025-12-13 07:29:26,570 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:26,577 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 07:29:26,577 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.025) 0:36:22.727 ***** 2025-12-13 07:29:26,577 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.025) 0:36:22.725 ***** 2025-12-13 07:29:26,729 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:26,736 p=31853 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 07:29:26,736 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.158) 0:36:22.886 ***** 2025-12-13 07:29:26,736 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.159) 0:36:22.884 ***** 2025-12-13 07:29:26,889 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:29:26,897 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 82 Kustomize and update Control Plane] *** 2025-12-13 07:29:26,897 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.161) 0:36:23.047 ***** 2025-12-13 07:29:26,897 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:29:26 +0000 (0:00:00.161) 0:36:23.046 ***** 2025-12-13 07:29:26,939 p=31853 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_022_run_hook_without_retry_82.log 2025-12-13 07:31:08,856 p=31853 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:31:08,863 p=31853 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 82 Kustomize and update Control Plane] *** 2025-12-13 07:31:08,863 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:08 +0000 (0:01:41.965) 0:38:05.012 ***** 2025-12-13 07:31:08,863 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:08 +0000 (0:01:41.965) 0:38:05.011 ***** 2025-12-13 07:31:08,880 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:08,888 p=31853 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ 
hook_name }}.yml] *** 2025-12-13 07:31:08,888 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:08 +0000 (0:00:00.025) 0:38:05.038 ***** 2025-12-13 07:31:08,888 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:08 +0000 (0:00:00.025) 0:38:05.037 ***** 2025-12-13 07:31:09,037 p=31853 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:09,045 p=31853 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 07:31:09,045 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.156) 0:38:05.195 ***** 2025-12-13 07:31:09,045 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.156) 0:38:05.194 ***** 2025-12-13 07:31:09,061 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:09,074 p=31853 u=zuul n=ansible | TASK [Run validations name=validations] **************************************** 2025-12-13 07:31:09,074 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.028) 0:38:05.223 ***** 2025-12-13 07:31:09,074 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.028) 0:38:05.222 ***** 2025-12-13 07:31:09,090 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:09,103 p=31853 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 07:31:09,103 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.029) 0:38:05.252 ***** 2025-12-13 07:31:09,103 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.029) 0:38:05.251 ***** 2025-12-13 07:31:09,116 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:09,125 p=31853 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
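The "Check if we have a file" / "Load generated content" pair seen earlier in this step is how a hook can hand variables back to the main run: the hook may drop a {{ step }}_{{ hook_name }}.yml file under artifacts, and it is only loaded when present (presumably via include_vars, given the file= argument logged). A sketch of that conditional load; the artifact file name below is illustrative.

- hosts: localhost
  gather_facts: false
  vars:
    _artifact: /home/zuul/ci-framework-data/artifacts/post_deploy_example_hook.yml  # illustrative name
  tasks:
    - name: Check if we have a file
      ansible.builtin.stat:
        path: "{{ _artifact }}"
      register: _artifact_stat

    - name: Load generated content in main playbook
      ansible.builtin.include_vars:
        file: "{{ _artifact }}"
      when: _artifact_stat.stat.exists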
*** 2025-12-13 07:31:09,125 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.022) 0:38:05.275 ***** 2025-12-13 07:31:09,125 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.022) 0:38:05.273 ***** 2025-12-13 07:31:09,139 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:09,147 p=31853 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_deploy _raw_params={{ hook.type }}.yml] *** 2025-12-13 07:31:09,147 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.022) 0:38:05.297 ***** 2025-12-13 07:31:09,147 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.022) 0:38:05.296 ***** 2025-12-13 07:31:09,219 p=31853 u=zuul n=ansible | skipping: [localhost] => (item={'name': '61 HCI pre deploy kustomizations', 'source': 'control_plane_hci_pre_deploy.yml', 'type': 'playbook'}) 2025-12-13 07:31:09,222 p=31853 u=zuul n=ansible | skipping: [localhost] => (item={'name': '80 Kustomize OpenStack CR', 'source': 'control_plane_horizon.yml', 'type': 'playbook'}) 2025-12-13 07:31:09,223 p=31853 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:09,234 p=31853 u=zuul n=ansible | TASK [Early end if not architecture deploy _raw_params=end_play] *************** 2025-12-13 07:31:09,234 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.087) 0:38:05.384 ***** 2025-12-13 07:31:09,234 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.087) 0:38:05.383 ***** 2025-12-13 07:31:09,243 p=31853 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2025-12-13 07:31:09,243 p=31853 u=zuul n=ansible | compute-0 : ok=0 changed=0 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2025-12-13 07:31:09,243 p=31853 u=zuul n=ansible | localhost : ok=278 changed=93 unreachable=0 failed=0 skipped=163 rescued=0 ignored=1 2025-12-13 07:31:09,243 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.008) 0:38:05.393 ***** 2025-12-13 07:31:09,243 p=31853 u=zuul n=ansible | =============================================================================== 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | edpm_deploy : Wait for OpenStackDataPlaneDeployment become Ready ------ 810.08s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | edpm_deploy : Wait for OpenStackDataPlaneDeployment become Ready ------ 444.66s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | edpm_prepare : Wait for OpenStack controlplane to be deployed --------- 246.31s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | run_hook : Run hook without retry - 80 Run Ceph hook playbook --------- 197.18s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | install_yamls_makes : Run openstack ----------------------------------- 114.85s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | run_hook : Run hook without retry - 82 Kustomize and update Control Plane - 101.97s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | install_yamls_makes : Run openstack_init ------------------------------- 62.86s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | edpm_prepare : Wait for OpenStack subscription creation ---------------- 60.82s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | run_hook : Run hook without retry - Download needed tools -------------- 31.33s 2025-12-13 07:31:09,244 p=31853 u=zuul n=ansible | repo_setup : Check for gating.repo file on content provider ------------ 30.47s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | 
edpm_prepare : Wait for control plane to change its status ------------- 30.03s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 25.72s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | cert_manager : Wait for cert-manager pods to be ready ------------------ 11.84s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 8.24s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | run_hook : Run hook without retry - Fetch nodes facts and save them as parameters --- 7.81s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | install_yamls_makes : Run edpm_deploy_prep ------------------------------ 6.81s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | ci_setup : Install openshift client ------------------------------------- 6.48s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | ci_local_storage : Perform action in the PV directory ------------------- 4.05s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | install_yamls_makes : Run netconfig_deploy ------------------------------ 3.53s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | repo_setup : Cleanup existing metadata ---------------------------------- 3.37s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | Saturday 13 December 2025 07:31:09 +0000 (0:00:00.010) 0:38:05.393 ***** 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | =============================================================================== 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | edpm_deploy ---------------------------------------------------------- 1263.02s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | run_hook -------------------------------------------------------------- 356.15s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | edpm_prepare ---------------------------------------------------------- 340.92s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | install_yamls_makes --------------------------------------------------- 190.66s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | repo_setup ------------------------------------------------------------- 48.85s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | ci_setup --------------------------------------------------------------- 34.12s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | cert_manager ----------------------------------------------------------- 17.80s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | ci_local_storage -------------------------------------------------------- 8.86s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | openshift_setup --------------------------------------------------------- 4.13s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | hci_prepare ------------------------------------------------------------- 4.00s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | install_ca -------------------------------------------------------------- 3.42s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | openshift_login --------------------------------------------------------- 3.13s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | install_yamls ----------------------------------------------------------- 2.66s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | cifmw_setup ------------------------------------------------------------- 2.14s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | discover_latest_image --------------------------------------------------- 1.39s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | 
install_openstack_ca ---------------------------------------------------- 0.91s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 0.89s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | edpm_deploy_baremetal --------------------------------------------------- 0.60s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | cifmw_nfs --------------------------------------------------------------- 0.43s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | operator_build ---------------------------------------------------------- 0.30s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | networking_mapper ------------------------------------------------------- 0.28s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | libvirt_manager --------------------------------------------------------- 0.27s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | ansible.builtin.file ---------------------------------------------------- 0.23s 2025-12-13 07:31:09,245 p=31853 u=zuul n=ansible | ansible.builtin.meta ---------------------------------------------------- 0.06s 2025-12-13 07:31:09,246 p=31853 u=zuul n=ansible | ansible.builtin.include_tasks ------------------------------------------- 0.06s 2025-12-13 07:31:09,246 p=31853 u=zuul n=ansible | pkg_build --------------------------------------------------------------- 0.05s 2025-12-13 07:31:09,246 p=31853 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.02s 2025-12-13 07:31:09,246 p=31853 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-12-13 07:31:09,246 p=31853 u=zuul n=ansible | total ---------------------------------------------------------------- 2285.36s 2025-12-13 07:31:25,594 p=38280 u=zuul n=ansible | PLAY [Run Post-deployment admin setup steps, test, and compliance scan] ******** 2025-12-13 07:31:25,627 p=38280 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 07:31:25,627 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.037) 0:00:00.037 ***** 2025-12-13 07:31:25,627 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.036) 0:00:00.036 ***** 2025-12-13 07:31:25,675 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:25,684 p=38280 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
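Every hook step opens with the same two assertions: hooks must be a real list (iterable, neither a bare string nor a mapping) and every element must be a mapping. A standalone sketch of those checks; the role precomputes a _not_mapping_hooks list, while this sketch swaps in a direct reject('mapping') filter for the same effect.

- hosts: localhost
  gather_facts: false
  vars:
    hooks:                          # any non-mapping element here would trip the second assert
      - name: example hook
        type: playbook
        source: example.yml
  tasks:
    - name: Assert parameters are valid
      ansible.builtin.assert:
        quiet: true
        that:
          - hooks is not string
          - hooks is not mapping
          - hooks is iterable

    - name: Assert single hooks are all mappings
      ansible.builtin.assert:
        quiet: true
        that:
          - hooks | reject('mapping') | list | length == 0
        msg: All single hooks must be a list of mappings or a mapping.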
*** 2025-12-13 07:31:25,684 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.056) 0:00:00.093 ***** 2025-12-13 07:31:25,684 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.056) 0:00:00.093 ***** 2025-12-13 07:31:25,748 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:25,757 p=38280 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_admin_setup _raw_params={{ hook.type }}.yml] *** 2025-12-13 07:31:25,757 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.072) 0:00:00.166 ***** 2025-12-13 07:31:25,757 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.072) 0:00:00.166 ***** 2025-12-13 07:31:25,822 p=38280 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:25,838 p=38280 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2025-12-13 07:31:25,838 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.081) 0:00:00.248 ***** 2025-12-13 07:31:25,838 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.081) 0:00:00.247 ***** 2025-12-13 07:31:25,875 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:25,886 p=38280 u=zuul n=ansible | TASK [os_net_setup : Delete existing subnets _raw_params=set -euxo pipefail if [ $(oc exec -n {{ cifmw_os_net_setup_namespace }} openstackclient -- openstack subnet list --network {{ item.0.name }} -c Name -f value | grep -c {{ item.1.name }}) != 0 ];then oc exec -n {{ cifmw_os_net_setup_namespace }} openstackclient -- openstack subnet delete {{ item.1.name }} fi ] *** 2025-12-13 07:31:25,886 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.047) 0:00:00.295 ***** 2025-12-13 07:31:25,886 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:25 +0000 (0:00:00.047) 0:00:00.295 ***** 2025-12-13 07:31:28,920 p=38280 u=zuul n=ansible | changed: [localhost] => (item=[{'name': 'public', 'external': True, 'shared': False, 'is_default': True, 'provider_network_type': 'flat', 'provider_physical_network': 'datacentre', 'availability_zone_hints': [], 'subnets': [{'name': 'public_subnet', 'cidr': '192.168.122.0/24', 'allocation_pool_start': '192.168.122.171', 'allocation_pool_end': '192.168.122.250', 'gateway_ip': '192.168.122.1', 'enable_dhcp': True}]}, {'name': 'public_subnet', 'cidr': '192.168.122.0/24', 'allocation_pool_start': '192.168.122.171', 'allocation_pool_end': '192.168.122.250', 'gateway_ip': '192.168.122.1', 'enable_dhcp': True}]) 2025-12-13 07:31:28,929 p=38280 u=zuul n=ansible | TASK [os_net_setup : Delete existing subnet pools _raw_params=set -euxo pipefail if [ $(oc exec -n {{ cifmw_os_net_setup_namespace }} openstackclient -- openstack subnet pool list -c Name -f value | grep -c {{ item.name }}) != 0 ];then oc exec -n {{ cifmw_os_net_setup_namespace }} openstackclient -- openstack subnet pool delete {{ item.name }} fi ] *** 2025-12-13 07:31:28,929 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:28 +0000 (0:00:03.042) 0:00:03.338 ***** 2025-12-13 07:31:28,929 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:28 +0000 (0:00:03.042) 0:00:03.338 ***** 2025-12-13 07:31:30,951 p=38280 u=zuul n=ansible | changed: [localhost] => (item={'name': 'shared-pool-ipv4', 'default_prefix_length': 26, 'prefixes': '10.1.0.0/20', 'is_default': True, 'is_shared': True}) 2025-12-13 07:31:32,940 p=38280 u=zuul n=ansible | changed: [localhost] => (item={'name': 'shared-pool-ipv6', 
'default_prefix_length': 64, 'prefixes': 'fdfe:381f:8400::/56', 'is_default': True, 'is_shared': True}) 2025-12-13 07:31:32,950 p=38280 u=zuul n=ansible | TASK [os_net_setup : Delete existing networks _raw_params=set -euxo pipefail if [ $(oc exec -n {{ cifmw_os_net_setup_namespace }} openstackclient -- openstack network list -c Name -f value | grep -c {{ item.name }}) != 0 ];then oc exec -n {{ cifmw_os_net_setup_namespace }} openstackclient -- openstack network delete {{ item.name }} fi ] *** 2025-12-13 07:31:32,951 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:32 +0000 (0:00:04.021) 0:00:07.360 ***** 2025-12-13 07:31:32,951 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:32 +0000 (0:00:04.021) 0:00:07.360 ***** 2025-12-13 07:31:34,967 p=38280 u=zuul n=ansible | changed: [localhost] => (item={'name': 'public', 'external': True, 'shared': False, 'is_default': True, 'provider_network_type': 'flat', 'provider_physical_network': 'datacentre', 'availability_zone_hints': [], 'subnets': [{'name': 'public_subnet', 'cidr': '192.168.122.0/24', 'allocation_pool_start': '192.168.122.171', 'allocation_pool_end': '192.168.122.250', 'gateway_ip': '192.168.122.1', 'enable_dhcp': True}]}) 2025-12-13 07:31:34,978 p=38280 u=zuul n=ansible | TASK [os_net_setup : Print network creation commands msg={{ lookup('ansible.builtin.template', _template_file) }}] *** 2025-12-13 07:31:34,978 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:34 +0000 (0:00:02.027) 0:00:09.388 ***** 2025-12-13 07:31:34,978 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:34 +0000 (0:00:02.027) 0:00:09.387 ***** 2025-12-13 07:31:35,037 p=38280 u=zuul n=ansible | ok: [localhost] => msg: | set -euo pipefail oc exec -n openstack openstackclient -- openstack network create \ --external \ --default \ --provider-network-type flat \ --provider-physical-network datacentre \ --no-share \ public 2025-12-13 07:31:35,046 p=38280 u=zuul n=ansible | TASK [os_net_setup : Create networks _raw_params={{ lookup('ansible.builtin.template', _template_file) }} ] *** 2025-12-13 07:31:35,046 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:35 +0000 (0:00:00.067) 0:00:09.455 ***** 2025-12-13 07:31:35,046 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:35 +0000 (0:00:00.067) 0:00:09.455 ***** 2025-12-13 07:31:37,541 p=38280 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:31:37,550 p=38280 u=zuul n=ansible | TASK [os_net_setup : Print subnet command creation msg={{ lookup('ansible.builtin.template', _template_file) }}] *** 2025-12-13 07:31:37,550 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:37 +0000 (0:00:02.503) 0:00:11.959 ***** 2025-12-13 07:31:37,550 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:37 +0000 (0:00:02.503) 0:00:11.959 ***** 2025-12-13 07:31:37,618 p=38280 u=zuul n=ansible | ok: [localhost] => msg: | set -euo pipefail oc exec -n openstack openstackclient -- openstack subnet create \ --allocation-pool start=192.168.122.171,end=192.168.122.250 \ --subnet-range 192.168.122.0/24 \ --gateway 192.168.122.1 \ --network public \ public_subnet 2025-12-13 07:31:37,626 p=38280 u=zuul n=ansible | TASK [os_net_setup : Create subnets _raw_params={{ lookup('ansible.builtin.template', _template_file) }} ] *** 2025-12-13 07:31:37,626 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:37 +0000 (0:00:00.076) 0:00:12.036 ***** 2025-12-13 07:31:37,626 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:37 +0000 (0:00:00.076) 0:00:12.035 ***** 2025-12-13 
07:31:40,405 p=38280 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:31:40,413 p=38280 u=zuul n=ansible | TASK [os_net_setup : Print subnet pools command creation msg={{ lookup('ansible.builtin.template', _template_file) }}] *** 2025-12-13 07:31:40,413 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:40 +0000 (0:00:02.787) 0:00:14.823 ***** 2025-12-13 07:31:40,414 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:40 +0000 (0:00:02.787) 0:00:14.822 ***** 2025-12-13 07:31:40,490 p=38280 u=zuul n=ansible | ok: [localhost] => msg: | set -euo pipefail oc exec -n openstack openstackclient -- openstack subnet pool create \ --default-prefix-length 26 \ --pool-prefix 10.1.0.0/20 \ --default \ --share \ shared-pool-ipv4 oc exec -n openstack openstackclient -- openstack subnet pool create \ --default-prefix-length 64 \ --pool-prefix fdfe:381f:8400::/56 \ --default \ --share \ shared-pool-ipv6 2025-12-13 07:31:40,499 p=38280 u=zuul n=ansible | TASK [os_net_setup : Create subnet pools _raw_params={{ lookup('ansible.builtin.template', _template_file) }} ] *** 2025-12-13 07:31:40,500 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:40 +0000 (0:00:00.086) 0:00:14.909 ***** 2025-12-13 07:31:40,500 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:40 +0000 (0:00:00.086) 0:00:14.909 ***** 2025-12-13 07:31:45,983 p=38280 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:31:45,998 p=38280 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 07:31:45,998 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:45 +0000 (0:00:05.498) 0:00:20.407 ***** 2025-12-13 07:31:45,998 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:45 +0000 (0:00:05.498) 0:00:20.407 ***** 2025-12-13 07:31:46,045 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:46,054 p=38280 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] 
*** 2025-12-13 07:31:46,054 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.056) 0:00:20.463 ***** 2025-12-13 07:31:46,054 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.056) 0:00:20.463 ***** 2025-12-13 07:31:46,119 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:46,127 p=38280 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_admin_setup _raw_params={{ hook.type }}.yml] *** 2025-12-13 07:31:46,128 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.073) 0:00:20.537 ***** 2025-12-13 07:31:46,128 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.073) 0:00:20.537 ***** 2025-12-13 07:31:46,193 p=38280 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:46,215 p=38280 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] *** 2025-12-13 07:31:46,215 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.087) 0:00:20.624 ***** 2025-12-13 07:31:46,215 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.087) 0:00:20.624 ***** 2025-12-13 07:31:46,262 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:46,270 p=38280 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] *** 2025-12-13 07:31:46,271 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.055) 0:00:20.680 ***** 2025-12-13 07:31:46,271 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.055) 0:00:20.680 ***** 2025-12-13 07:31:46,352 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:46,362 p=38280 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_tests _raw_params={{ hook.type }}.yml] *** 2025-12-13 07:31:46,362 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.091) 0:00:20.772 ***** 2025-12-13 07:31:46,362 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.091) 0:00:20.771 ***** 2025-12-13 07:31:46,467 p=38280 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for localhost => (item={'name': '90 Create manila resources', 'type': 'playbook', 'source': 'manila_create_default_resources.yml'}) 2025-12-13 07:31:46,498 p=38280 u=zuul n=ansible | TASK [run_hook : Set playbook path for 90 Create manila resources cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e operator_namespace={{ _operator_namespace }} -e namespace={{ _namespace}} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] *** 2025-12-13 07:31:46,498 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.135) 0:00:20.907 ***** 2025-12-13 07:31:46,498 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.135) 0:00:20.907 ***** 2025-12-13 07:31:46,536 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 
07:31:46,544 p=38280 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] *********************** 2025-12-13 07:31:46,544 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.046) 0:00:20.954 ***** 2025-12-13 07:31:46,544 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.046) 0:00:20.953 ***** 2025-12-13 07:31:46,770 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:46,779 p=38280 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] *** 2025-12-13 07:31:46,779 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.234) 0:00:21.189 ***** 2025-12-13 07:31:46,779 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.234) 0:00:21.188 ***** 2025-12-13 07:31:46,789 p=38280 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:46,798 p=38280 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] *** 2025-12-13 07:31:46,798 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.018) 0:00:21.207 ***** 2025-12-13 07:31:46,798 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:46 +0000 (0:00:00.018) 0:00:21.207 ***** 2025-12-13 07:31:47,019 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:47,027 p=38280 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] *** 2025-12-13 07:31:47,027 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.229) 0:00:21.437 ***** 2025-12-13 07:31:47,027 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.229) 0:00:21.436 ***** 2025-12-13 07:31:47,041 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:47,049 p=38280 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] *** 2025-12-13 07:31:47,049 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.021) 0:00:21.459 ***** 2025-12-13 07:31:47,049 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.021) 0:00:21.458 ***** 2025-12-13 07:31:47,274 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:47,282 p=38280 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] *** 2025-12-13 07:31:47,283 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.233) 0:00:21.692 ***** 2025-12-13 07:31:47,283 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.233) 0:00:21.692 ***** 2025-12-13 07:31:47,430 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:47,441 p=38280 u=zuul n=ansible | TASK [run_hook : Run hook without retry - 90 Create manila resources] ********** 2025-12-13 07:31:47,441 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.158) 0:00:21.850 ***** 2025-12-13 07:31:47,441 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:47 +0000 (0:00:00.158) 0:00:21.850 ***** 2025-12-13 07:31:47,483 p=38280 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_024_run_hook_without_retry_90.log 2025-12-13 07:31:54,311 p=38280 u=zuul n=ansible | changed: [localhost] 2025-12-13 
07:31:54,320 p=38280 u=zuul n=ansible | TASK [run_hook : Run hook with retry - 90 Create manila resources] ************* 2025-12-13 07:31:54,321 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:06.879) 0:00:28.730 ***** 2025-12-13 07:31:54,321 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:06.879) 0:00:28.730 ***** 2025-12-13 07:31:54,333 p=38280 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:54,342 p=38280 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 07:31:54,342 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.021) 0:00:28.752 ***** 2025-12-13 07:31:54,342 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.021) 0:00:28.751 ***** 2025-12-13 07:31:54,480 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:54,508 p=38280 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2025-12-13 07:31:54,508 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.166) 0:00:28.918 ***** 2025-12-13 07:31:54,508 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.166) 0:00:28.917 ***** 2025-12-13 07:31:54,519 p=38280 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:54,534 p=38280 u=zuul n=ansible | TASK [test_operator : Cleanup previous test-operator resources _raw_params=cleanup.yaml] *** 2025-12-13 07:31:54,534 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.025) 0:00:28.944 ***** 2025-12-13 07:31:54,534 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.025) 0:00:28.943 ***** 2025-12-13 07:31:54,548 p=38280 u=zuul n=ansible | skipping: [localhost] 2025-12-13 07:31:54,558 p=38280 u=zuul n=ansible | TASK [test_operator : Ensure test_operator folder exists path={{ cifmw_test_operator_artifacts_basedir }}, state=directory, mode=0755, recurse=True, owner={{ ansible_user | default(lookup('env', 'USER')) }}, group={{ ansible_user | default(lookup('env', 'USER')) }}] *** 2025-12-13 07:31:54,558 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.023) 0:00:28.968 ***** 2025-12-13 07:31:54,558 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.023) 0:00:28.967 ***** 2025-12-13 07:31:54,713 p=38280 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:31:54,722 p=38280 u=zuul n=ansible | TASK [test_operator : Get openstack-operator csv information kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=ClusterServiceVersion, api_version=operators.coreos.com/v1alpha1, label_selectors=['operators.coreos.com/openstack-operator.openstack-operators'], namespace={{ cifmw_test_operator_controller_namespace }}] *** 2025-12-13 07:31:54,722 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.164) 0:00:29.132 ***** 2025-12-13 07:31:54,723 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:54 +0000 (0:00:00.164) 0:00:29.131 ***** 2025-12-13 07:31:55,480 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:55,496 p=38280 u=zuul n=ansible | TASK [test_operator : Get full name of openstack-operator CSV openstack_operator_csv_name={{ csv_info.resources | map(attribute='metadata.name') | list | first }}] *** 2025-12-13 
07:31:55,496 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:55 +0000 (0:00:00.773) 0:00:29.905 ***** 2025-12-13 07:31:55,496 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:55 +0000 (0:00:00.773) 0:00:29.905 ***** 2025-12-13 07:31:55,530 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:55,540 p=38280 u=zuul n=ansible | TASK [test_operator : Get index of test-operator image _raw_params=set -o pipefail; oc get ClusterServiceVersion {{ openstack_operator_csv_name }} -o json | jq '.spec.install.spec.deployments[0].spec.template.spec.containers[0].env | to_entries[] | select(.value.name == "RELATED_IMAGE_TEST_OPERATOR_MANAGER_IMAGE_URL").key'] *** 2025-12-13 07:31:55,540 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:55 +0000 (0:00:00.044) 0:00:29.950 ***** 2025-12-13 07:31:55,540 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:55 +0000 (0:00:00.044) 0:00:29.949 ***** 2025-12-13 07:31:55,843 p=38280 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:31:55,854 p=38280 u=zuul n=ansible | TASK [test_operator : Patch test-operator version in CSV kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=ClusterServiceVersion, api_version=operators.coreos.com/v1alpha1, namespace={{ cifmw_test_operator_controller_namespace }}, name={{ openstack_operator_csv_name }}, patch=[{'path': '/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/{{ image_index.stdout }}/value', 'value': '{{ cifmw_test_operator_bundle }}', 'op': 'replace'}]] *** 2025-12-13 07:31:55,854 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:55 +0000 (0:00:00.314) 0:00:30.264 ***** 2025-12-13 07:31:55,854 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:55 +0000 (0:00:00.314) 0:00:30.263 ***** 2025-12-13 07:31:56,659 p=38280 u=zuul n=ansible | changed: [localhost] 2025-12-13 07:31:56,673 p=38280 u=zuul n=ansible | TASK [test_operator : Get test-operator-controller-manager pod information kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Pod, label_selectors=['control-plane=controller-manager', 'openstack.org/operator-name=test'], namespace={{ cifmw_test_operator_controller_namespace }}] *** 2025-12-13 07:31:56,673 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:56 +0000 (0:00:00.818) 0:00:31.082 ***** 2025-12-13 07:31:56,673 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:56 +0000 (0:00:00.818) 0:00:31.082 ***** 2025-12-13 07:31:57,268 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:57,278 p=38280 u=zuul n=ansible | TASK [test_operator : Get full name of test-operator-controller-manager pod test_operator_controller_name={{ pod_info.resources | map(attribute='metadata.name') | list | first }}] *** 2025-12-13 07:31:57,278 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:57 +0000 (0:00:00.605) 0:00:31.688 ***** 2025-12-13 07:31:57,278 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:57 +0000 (0:00:00.605) 0:00:31.687 ***** 2025-12-13 07:31:57,310 p=38280 u=zuul n=ansible | ok: [localhost] 2025-12-13 07:31:57,318 p=38280 u=zuul n=ansible | TASK [test_operator : Wait until the test-operator-controller-manager is reloaded kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, namespace={{ 
cifmw_test_operator_controller_namespace }}, kind=Pod] *** 2025-12-13 07:31:57,318 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:57 +0000 (0:00:00.040) 0:00:31.728 ***** 2025-12-13 07:31:57,319 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:31:57 +0000 (0:00:00.040) 0:00:31.727 ***** 2025-12-13 07:31:58,055 p=38280 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ not ( pod_list.resources | map(attribute='metadata.name') | select('match', test_operator_controller_name) | list ) }} 2025-12-13 07:35:33,312 p=38280 u=zuul n=ansible | fatal: [localhost]: FAILED! => api_found: true attempts: 20 changed: false resources: - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.53/23"],"mac_address":"0a:58:0a:d9:00:35","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.53/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.53" ], "mac": "0a:58:0a:d9:00:35", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: barbican-operator-controller-manager-95949466- labels: app.kubernetes.io/name: barbican-operator control-plane: controller-manager openstack.org/operator-name: barbican pod-template-hash: '95949466' managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"182e1202-66f6-4393-bac9-51a9ecc904de"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: 
'2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.53"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: barbican-operator-controller-manager-95949466-9ffgl namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: barbican-operator-controller-manager-95949466 uid: 182e1202-66f6-4393-bac9-51a9ecc904de resourceVersion: '36592' uid: 3ec726b0-e1c1-497a-9364-f483cdf9b69b spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-s8zz6 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: barbican-operator-controller-manager-dockercfg-bf9km nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: barbican-operator-controller-manager serviceAccountName: barbican-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-s8zz6 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: 
service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://a6210a86c2a414b7dbadbb66356b261ff3092b2a968514cf99a9c10b158df2a6 image: quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea imageID: quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-s8zz6 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.53 podIPs: - ip: 10.217.0.53 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.54/23"],"mac_address":"0a:58:0a:d9:00:36","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.54/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.54" ], "mac": "0a:58:0a:d9:00:36", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: cinder-operator-controller-manager-5cf45c46bd- labels: app.kubernetes.io/name: cinder-operator control-plane: controller-manager openstack.org/operator-name: cinder pod-template-hash: 5cf45c46bd managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"f850ba9f-71b0-4f5b-a43e-82c972a9c7ee"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} 
f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.54"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: cinder-operator-controller-manager-5cf45c46bd-tndds namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: cinder-operator-controller-manager-5cf45c46bd uid: f850ba9f-71b0-4f5b-a43e-82c972a9c7ee resourceVersion: '36598' uid: 11c2a2ff-6f82-4b30-909b-f0f8c1e92394 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9q42t readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: cinder-operator-controller-manager-dockercfg-m76tf nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: cinder-operator-controller-manager serviceAccountName: cinder-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - 
effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-9q42t projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://b2b3d816802f29a546519b2ba700b3cb1590f9bc976c8e080c097e93a81586c5 image: quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3 imageID: quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-9q42t readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.54 podIPs: - ip: 10.217.0.54 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.58/23"],"mac_address":"0a:58:0a:d9:00:3a","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.58/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.58" ], "mac": "0a:58:0a:d9:00:3a", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: designate-operator-controller-manager-66f8b87655- labels: app.kubernetes.io/name: designate-operator control-plane: controller-manager openstack.org/operator-name: designate pod-template-hash: 66f8b87655 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"929aeecf-80a0-49cc-bb9f-15f281dfe4f6"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: 
{} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.58"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: designate-operator-controller-manager-66f8b87655-h6nxs namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: designate-operator-controller-manager-66f8b87655 uid: 929aeecf-80a0-49cc-bb9f-15f281dfe4f6 resourceVersion: '36631' uid: 32a38d48-fe84-4ede-860c-ae76de27cbe6 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: 
/dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-82lrw readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: designate-operator-controller-manager-dockercfg-5dm4f nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: designate-operator-controller-manager serviceAccountName: designate-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-82lrw projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://3ca185c955eeb95c8f8aeaef9ea9e2bc7ca5baddede81970a89b8aedc00b575b image: quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a imageID: quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-82lrw readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.58 podIPs: - ip: 10.217.0.58 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.51/23"],"mac_address":"0a:58:0a:d9:00:33","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.51/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.51" ], "mac": "0a:58:0a:d9:00:33", "default": true, "dns": {} }] openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: '2025-12-13T06:58:15Z' generateName: ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56f56fea- labels: batch.kubernetes.io/controller-uid: 
c7c8d781-1363-428a-996f-604d42f87d6a batch.kubernetes.io/job-name: ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56f56fea controller-uid: c7c8d781-1363-428a-996f-604d42f87d6a job-name: ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56f56fea olm.managed: 'true' managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:58:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:batch.kubernetes.io/controller-uid: {} f:batch.kubernetes.io/job-name: {} f:controller-uid: {} f:job-name: {} f:olm.managed: {} f:ownerReferences: .: {} k:{"uid":"c7c8d781-1363-428a-996f-604d42f87d6a"}: {} f:spec: f:containers: k:{"name":"extract"}: .: {} f:command: {} f:env: .: {} k:{"name":"CONTAINER_IMAGE"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: .: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:capabilities: .: {} f:drop: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/bundle"}: .: {} f:mountPath: {} f:name: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:initContainers: .: {} k:{"name":"pull"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: .: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:capabilities: .: {} f:drop: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/bundle"}: .: {} f:mountPath: {} f:name: {} k:{"mountPath":"/util"}: .: {} f:mountPath: {} f:name: {} k:{"name":"util"}: .: {} f:command: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:resources: .: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:capabilities: .: {} f:drop: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/util"}: .: {} f:mountPath: {} f:name: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:seccompProfile: .: {} f:type: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"bundle"}: .: {} f:emptyDir: {} f:name: {} k:{"name":"util"}: .: {} f:emptyDir: {} f:name: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:58:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:58:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:reason: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:initContainerStatuses: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.51"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:58:20Z' name: 
ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56fsdfcx namespace: openstack-operators ownerReferences: - apiVersion: batch/v1 blockOwnerDeletion: true controller: true kind: Job name: ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56f56fea uid: c7c8d781-1363-428a-996f-604d42f87d6a resourceVersion: '34099' uid: 69c15b9d-0c5d-472e-9a3f-b9b442ca557c spec: containers: - command: - opm - alpha - bundle - extract - -m - /bundle/ - -n - openstack-operators - -c - ea5d71784a81c95dff2031a02f0a0b3f756f86f14acad8f152d938f56f56fea - -z env: - name: CONTAINER_IMAGE value: quay.io/openstack-k8s-operators/openstack-operator-bundle:0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad imagePullPolicy: IfNotPresent name: extract resources: requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000660000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /bundle name: bundle - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx9f6 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: default-dockercfg-5n9fn initContainers: - command: - /bin/cp - -Rv - /bin/cpb - /util/cpb image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2 imagePullPolicy: IfNotPresent name: util resources: requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000660000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /util name: util - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx9f6 readOnly: true - command: - /util/cpb - /bundle image: quay.io/openstack-k8s-operators/openstack-operator-bundle:0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46 imagePullPolicy: Always name: pull resources: requests: cpu: 10m memory: 50Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000660000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /bundle name: bundle - mountPath: /util name: util - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx9f6 readOnly: true nodeName: crc nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: fsGroup: 1000660000 seLinuxOptions: level: s0:c26,c5 seccompProfile: type: RuntimeDefault serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - key: kubernetes.io/arch operator: Equal value: amd64 - key: kubernetes.io/arch operator: Equal value: arm64 - key: kubernetes.io/arch operator: Equal value: ppc64le - key: kubernetes.io/arch operator: Equal value: s390x - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - emptyDir: {} name: bundle - emptyDir: {} name: util - name: kube-api-access-rx9f6 projected: 
defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:20Z' status: 'False' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:18Z' reason: PodCompleted status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:15Z' reason: PodCompleted status: 'False' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:15Z' reason: PodCompleted status: 'False' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://ff2a790534b05bf86adadd4bf2129f9d6d38046b16261bfe6f1c73c253bf1014 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad lastState: {} name: extract ready: false restartCount: 0 started: false state: terminated: containerID: cri-o://ff2a790534b05bf86adadd4bf2129f9d6d38046b16261bfe6f1c73c253bf1014 exitCode: 0 finishedAt: '2025-12-13T06:58:18Z' reason: Completed startedAt: '2025-12-13T06:58:18Z' volumeMounts: - mountPath: /bundle name: bundle - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx9f6 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 initContainerStatuses: - containerID: cri-o://1d37ca69162c9cd2d4fa26b8a084c1548ffbe7f0816b2c7d00f6dba79910c2b5 image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2 imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2 lastState: {} name: util ready: true restartCount: 0 started: false state: terminated: containerID: cri-o://1d37ca69162c9cd2d4fa26b8a084c1548ffbe7f0816b2c7d00f6dba79910c2b5 exitCode: 0 finishedAt: '2025-12-13T06:58:16Z' reason: Completed startedAt: '2025-12-13T06:58:16Z' volumeMounts: - mountPath: /util name: util - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx9f6 readOnly: true recursiveReadOnly: Disabled - containerID: cri-o://02fc8e15651cdb64c169ccf962dbb1afe531861be5dc5f1f90d48bf2dbb9cc96 image: quay.io/openstack-k8s-operators/openstack-operator-bundle:0b7b865c642a4e7d0dba878ba5b0b58c4c8afc46 imageID: quay.io/openstack-k8s-operators/openstack-operator-bundle@sha256:bad4748f736241c3188e2afe063e4a2994fbf0024ba9ec54233d589ad10290c7 lastState: {} name: pull ready: true restartCount: 0 started: false state: terminated: containerID: cri-o://02fc8e15651cdb64c169ccf962dbb1afe531861be5dc5f1f90d48bf2dbb9cc96 exitCode: 0 finishedAt: '2025-12-13T06:58:18Z' reason: Completed startedAt: '2025-12-13T06:58:18Z' volumeMounts: - mountPath: /bundle name: bundle - mountPath: /util name: util - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rx9f6 readOnly: true recursiveReadOnly: Disabled phase: Succeeded podIP: 10.217.0.51 podIPs: - ip: 10.217.0.51 qosClass: Burstable startTime: '2025-12-13T06:58:15Z' - apiVersion: 
v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.55/23"],"mac_address":"0a:58:0a:d9:00:37","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.55/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.55" ], "mac": "0a:58:0a:d9:00:37", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: glance-operator-controller-manager-767f9d7567- labels: app.kubernetes.io/name: glance-operator control-plane: controller-manager openstack.org/operator-name: glance pod-template-hash: 767f9d7567 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"79f8a188-04ba-4198-bc62-a9ed36e84fa5"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} 
f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.55"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: glance-operator-controller-manager-767f9d7567-z2xnf namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: glance-operator-controller-manager-767f9d7567 uid: 79f8a188-04ba-4198-bc62-a9ed36e84fa5 resourceVersion: '36641' uid: fd6f17a4-40cc-4465-8c67-58c67230344d spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j4m5v readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: glance-operator-controller-manager-dockercfg-v66gt nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: glance-operator-controller-manager serviceAccountName: glance-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-j4m5v projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://9cabb9a74f07169d5610aab0a775da19b940de4c40bd6f6e5e89a2d444aad29b image: 
quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027 imageID: quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-j4m5v readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.55 podIPs: - ip: 10.217.0.55 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.57/23"],"mac_address":"0a:58:0a:d9:00:39","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.57/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.57" ], "mac": "0a:58:0a:d9:00:39", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: heat-operator-controller-manager-59b8dcb766- labels: app.kubernetes.io/name: heat-operator control-plane: controller-manager openstack.org/operator-name: heat pod-template-hash: 59b8dcb766 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"abec1725-d1a8-418c-98d1-f9c67d2b6af1"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: 
f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.57"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: heat-operator-controller-manager-59b8dcb766-9m7mp namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: heat-operator-controller-manager-59b8dcb766 uid: abec1725-d1a8-418c-98d1-f9c67d2b6af1 resourceVersion: '36609' uid: 98e02ffd-3d31-4b00-8bc7-5f225cdf9fc5 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8mk54 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: heat-operator-controller-manager-dockercfg-pg277 nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: heat-operator-controller-manager serviceAccountName: heat-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-8mk54 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null 
lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://f7f082614d1e2b707d1c2a8c40a5888e593ea21c92a68b469fa3f1eff1a75d6a image: quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429 imageID: quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-8mk54 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.57 podIPs: - ip: 10.217.0.57 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.56/23"],"mac_address":"0a:58:0a:d9:00:38","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.56/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.56" ], "mac": "0a:58:0a:d9:00:38", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: horizon-operator-controller-manager-6ccf486b9- labels: app.kubernetes.io/name: horizon-operator control-plane: controller-manager openstack.org/operator-name: horizon pod-template-hash: 6ccf486b9 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"98591077-8585-47ce-8064-295a4c30851c"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} 
f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.56"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: horizon-operator-controller-manager-6ccf486b9-zmcbq namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: horizon-operator-controller-manager-6ccf486b9 uid: 98591077-8585-47ce-8064-295a4c30851c resourceVersion: '36604' uid: aac5283b-a0c7-4cac-8a72-07ca5444b743 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4v444 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: horizon-operator-controller-manager-dockercfg-s9gcw nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: horizon-operator-controller-manager serviceAccountName: horizon-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists 
tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-4v444 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://97a1943204e1c5f03f9a19813f41a3d9a29727af429a6837fb66f4c8873ea709 image: quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5 imageID: quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4v444 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.56 podIPs: - ip: 10.217.0.56 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.60/23"],"mac_address":"0a:58:0a:d9:00:3c","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.60/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.60" ], "mac": "0a:58:0a:d9:00:3c", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: infra-operator-controller-manager-58944d7758- labels: app.kubernetes.io/name: infra-operator control-plane: controller-manager openstack.org/operator-name: infra pod-template-hash: 58944d7758 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"aa461a6f-2670-4a3e-84d7-94afa0ef6d07"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} 
f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/tmp/k8s-webhook-server/serving-certs"}: .: {} f:mountPath: {} f:name: {} f:readOnly: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"cert"}: .: {} f:name: {} f:secret: .: {} f:defaultMode: {} f:secretName: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:32Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.60"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:51Z' name: infra-operator-controller-manager-58944d7758-4p77w namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: infra-operator-controller-manager-58944d7758 uid: aa461a6f-2670-4a3e-84d7-94afa0ef6d07 resourceVersion: '36999' uid: b8c3ef08-66ae-474e-8204-2338afb7d08d spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'true' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 600m memory: 2Gi requests: cpu: 
10m memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4zsdx readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: infra-operator-controller-manager-dockercfg-7lwpr nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: infra-operator-controller-manager serviceAccountName: infra-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: cert secret: defaultMode: 420 secretName: infra-operator-webhook-server-cert - name: kube-api-access-4zsdx projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:42Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:51Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:51Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://39871fdc9a2126b6eda789a0191035aebf2823f05a593e4420dcf6ee57a9d4df image: quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab imageID: quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4zsdx readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.60 podIPs: - ip: 10.217.0.60 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.73/23"],"mac_address":"0a:58:0a:d9:00:49","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.73/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", 
"ips": [ "10.217.0.73" ], "mac": "0a:58:0a:d9:00:49", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: ironic-operator-controller-manager-f458558d7- labels: app.kubernetes.io/name: ironic-operator control-plane: controller-manager openstack.org/operator-name: ironic pod-template-hash: f458558d7 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"0f092b3f-3122-4050-8b47-d847f7159ed0"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.73"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: ironic-operator-controller-manager-f458558d7-fhckm namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: ironic-operator-controller-manager-f458558d7 uid: 0f092b3f-3122-4050-8b47-d847f7159ed0 resourceVersion: '36614' uid: ac41b645-ea22-42ac-846e-fa16d0beaee4 spec: 
containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-25qj6 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: ironic-operator-controller-manager-dockercfg-hs8sk nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: ironic-operator-controller-manager serviceAccountName: ironic-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-25qj6 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://c7cef122f44d965fc0a7cf57b8795f7344fefca1d605a4fa78a71fe2459f1b16 image: quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87 imageID: quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-25qj6 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.73 podIPs: - ip: 10.217.0.73 
qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.74/23"],"mac_address":"0a:58:0a:d9:00:4a","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.74/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.74" ], "mac": "0a:58:0a:d9:00:4a", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: keystone-operator-controller-manager-5c7cbf548f- labels: app.kubernetes.io/name: keystone-operator control-plane: controller-manager openstack.org/operator-name: keystone pod-template-hash: 5c7cbf548f managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"babb9de3-7bc2-406f-9f0d-fb6cda6c44d8"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: 
{} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.74"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:55Z' name: keystone-operator-controller-manager-5c7cbf548f-jfdpn namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: keystone-operator-controller-manager-5c7cbf548f uid: babb9de3-7bc2-406f-9f0d-fb6cda6c44d8 resourceVersion: '37051' uid: 5e5582c5-50c3-4c4f-9693-16f2a71543ce spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pvdml readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: keystone-operator-controller-manager-dockercfg-t72bx nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: keystone-operator-controller-manager serviceAccountName: keystone-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-pvdml projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://3147d8f7299eda63caca789664b051158acf5919b00e82d0bbef8af76973d60c image: 
quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7 imageID: quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pvdml readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.74 podIPs: - ip: 10.217.0.74 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.75/23"],"mac_address":"0a:58:0a:d9:00:4b","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.75/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.75" ], "mac": "0a:58:0a:d9:00:4b", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: manila-operator-controller-manager-5fdd9786f7- labels: app.kubernetes.io/name: manila-operator control-plane: controller-manager openstack.org/operator-name: manila pod-template-hash: 5fdd9786f7 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"853126cc-a34e-4565-a2de-e3e80bf44977"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 
fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.75"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:55Z' name: manila-operator-controller-manager-5fdd9786f7-58rgg namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: manila-operator-controller-manager-5fdd9786f7 uid: 853126cc-a34e-4565-a2de-e3e80bf44977 resourceVersion: '37057' uid: 42b2a1fb-b5d1-46ff-932e-d831b53febf7 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-x8lms readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: manila-operator-controller-manager-dockercfg-6cc58 nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: manila-operator-controller-manager serviceAccountName: manila-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-x8lms projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - 
lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://a398e0b29af30c43915f0ade66cb2c4cfe77c32b07ed6cae47430d8f631cccb6 image: quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a imageID: quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-x8lms readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.75 podIPs: - ip: 10.217.0.75 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.76/23"],"mac_address":"0a:58:0a:d9:00:4c","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.76/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.76" ], "mac": "0a:58:0a:d9:00:4c", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: mariadb-operator-controller-manager-f76f4954c- labels: app.kubernetes.io/name: mariadb-operator control-plane: controller-manager openstack.org/operator-name: mariadb pod-template-hash: f76f4954c managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"01f94e5d-10ca-47a0-ac09-c7b8c330d005"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} 
f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.76"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: mariadb-operator-controller-manager-f76f4954c-cmcbp namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: mariadb-operator-controller-manager-f76f4954c uid: 01f94e5d-10ca-47a0-ac09-c7b8c330d005 resourceVersion: '36626' uid: 90ea237e-4f56-4008-a2df-d3c404424374 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-72fz9 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: mariadb-operator-controller-manager-dockercfg-crnvv nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: mariadb-operator-controller-manager serviceAccountName: mariadb-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: 
Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-72fz9 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://12bac91531d1f2723cb2b14b04eac7fcc6d447c3036a962ad0bdc78b887f47bf image: quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad imageID: quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-72fz9 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.76 podIPs: - ip: 10.217.0.76 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.77/23"],"mac_address":"0a:58:0a:d9:00:4d","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.77/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.77" ], "mac": "0a:58:0a:d9:00:4d", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: neutron-operator-controller-manager-7cd87b778f- labels: app.kubernetes.io/name: neutron-operator control-plane: controller-manager openstack.org/operator-name: neutron pod-template-hash: 7cd87b778f managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"b63d779b-f24c-40d0-92f5-edce9a211dda"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} 
f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.77"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:55Z' name: neutron-operator-controller-manager-7cd87b778f-vgdnd namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: neutron-operator-controller-manager-7cd87b778f uid: b63d779b-f24c-40d0-92f5-edce9a211dda resourceVersion: '37062' uid: d143ef34-f1db-411d-941b-c229888e22b2 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vh9gc readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: neutron-operator-controller-manager-dockercfg-ghtqd nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: neutron-operator-controller-manager serviceAccountName: neutron-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-vh9gc projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://8faf2e0aeb1754e7fb6daad2a658d5a40ef444545653af9d25bf1066752265ae image: quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557 imageID: quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vh9gc readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.77 podIPs: - ip: 10.217.0.77 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.78/23"],"mac_address":"0a:58:0a:d9:00:4e","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.78/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.78" ], "mac": "0a:58:0a:d9:00:4e", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: nova-operator-controller-manager-5fbbf8b6cc- labels: app.kubernetes.io/name: nova-operator control-plane: controller-manager openstack.org/operator-name: nova pod-template-hash: 5fbbf8b6cc managedFields: - apiVersion: v1 fieldsType: FieldsV1 
fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"c037df60-a9ab-44b2-ab97-ac3916c0ee5b"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.78"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:36Z' name: nova-operator-controller-manager-5fbbf8b6cc-464zb namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: nova-operator-controller-manager-5fbbf8b6cc uid: c037df60-a9ab-44b2-ab97-ac3916c0ee5b resourceVersion: '36667' uid: 08cf1d52-d8b7-477f-92c7-1dd2732ff9e3 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670 imagePullPolicy: 
IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qjq7g readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: nova-operator-controller-manager-dockercfg-s8d8k nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: nova-operator-controller-manager serviceAccountName: nova-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-qjq7g projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:36Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:36Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://4b72ae11abb04a812db0a0ea9b6a848534e11bb10954fb94d1141426a60637c3 image: quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670 imageID: quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-qjq7g readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.78 podIPs: - ip: 10.217.0.78 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: 
'{"default":{"ip_addresses":["10.217.0.79/23"],"mac_address":"0a:58:0a:d9:00:4f","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.79/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.79" ], "mac": "0a:58:0a:d9:00:4f", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: octavia-operator-controller-manager-68c649d9d- labels: app.kubernetes.io/name: octavia-operator control-plane: controller-manager openstack.org/operator-name: octavia pod-template-hash: 68c649d9d managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"7cf45960-b46b-44d4-a7b2-635876397dd1"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} 
k:{"ip":"10.217.0.79"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:35Z' name: octavia-operator-controller-manager-68c649d9d-k2fqd namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: octavia-operator-controller-manager-68c649d9d uid: 7cf45960-b46b-44d4-a7b2-635876397dd1 resourceVersion: '36652' uid: d86f3cba-c9ef-47eb-b04e-8f10ac1b0734 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4h6wp readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: octavia-operator-controller-manager-dockercfg-xprfr nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: octavia-operator-controller-manager serviceAccountName: octavia-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-4h6wp projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:35Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://57fbb8d3f0b79ffb3cd4d658695c011e30350bd7f0a4134e19034a9744e44eb6 image: quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168 imageID: 
quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-4h6wp readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.79 podIPs: - ip: 10.217.0.79 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.80/23"],"mac_address":"0a:58:0a:d9:00:50","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.80/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.80" ], "mac": "0a:58:0a:d9:00:50", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: '2025-12-13T06:59:15Z' generateName: openstack-baremetal-operator-controller-manager-689f887b54- labels: app.kubernetes.io/name: openstack-baremetal-operator control-plane: controller-manager openstack.org/operator-name: openstack-baremetal pod-template-hash: 689f887b54 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"16a4fa1d-d0bb-4efe-a27c-d3270e4569eb"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} 
k:{"name":"RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT"}: .: {} 
f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} 
k:{"name":"RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/tmp/k8s-webhook-server/serving-certs"}: .: {} f:mountPath: {} f:name: {} f:readOnly: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"cert"}: .: {} f:name: {} f:secret: .: {} f:defaultMode: {} f:secretName: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:32Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.80"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:51Z' name: openstack-baremetal-operator-controller-manager-689f887b544qprx namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: openstack-baremetal-operator-controller-manager-689f887b54 uid: 16a4fa1d-d0bb-4efe-a27c-d3270e4569eb resourceVersion: '37007' uid: 7cd61a98-cc77-41b1-a06f-912207565b37 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'true' - name: RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:add611bf73d5aab1ac07ef665281ed0e5ad1aded495b8b32927aa2e726abb29a - name: RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:5a3782b78f695106548597c758c23e5d812e81cb0b860f1fd4fe88587351337e - name: RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:36946a77001110f391fb254ec77129803a6b7c34dacfa1a4c8c51aa8d23d57c5 - name: 
RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:dd58b29b5d88662a621c685c2b76fe8a71cc9e82aa85dff22a66182a6ceef3ae - name: RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:fc47ed1c6249c9f6ef13ef1eac82d5a34819a715dea5117d33df0d0dc69ace8b - name: RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:e21d35c272d016f4dbd323dc827ee83538c96674adfb188e362aa652ce167b61 - name: RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT value: registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c - name: RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16 - name: RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:c2ace235f775334be02d78928802b76309543e869cc6b4b55843ee546691e6c3 - name: RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:be77cc58b87f299b42bb2cbe74f3f8d028b8c887851a53209441b60e1363aeb5 - name: RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777 - name: RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:41dc9cf27a902d9c7b392d730bd761cf3c391a548a841e9e4d38e1571f3c53bf - name: RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:174f8f712eb5fdda5061a1a68624befb27bbe766842653788583ec74c5ae506a - name: RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa - name: RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67 - name: RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1 - name: RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49 - name: RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b8d76f96b6f17a3318d089c0b5c0e6c292d969ab392cdcc708ec0f0188c953ae - name: RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:43c55407c7c9b4141482533546e6570535373f7e36df374dfbbe388293c19dbf - name: RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:097816f289af117f14cd8ee1678a9635e8da6de4a1bde834d02199c4ef65c5c0 - name: RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT value: quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:744c4b41194e2cb21e83147626d64fd72438a72d51bb32c3ad90cf1f9711fed1 - name: RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT value: 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:c980be07bda5796425ea2d727826efb48caf3927a425751d5609915a7f68e87e - name: RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-api@sha256:281668af8ed34c2464f3593d350cf7b695b41b81f40cc539ad74b7b65822afb9 - name: RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:84319e5dd6569ea531e64b688557c2a2e20deb5225f3d349e402e34858f00fe7 - name: RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-central@sha256:acb53e0e210562091843c212bc0cf5541daacd6f2bd18923430bae8c36578731 - name: RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:be6f4002842ebadf30d035721567a7e669f12a6eef8c00dc89030b3b08f3dd2c - name: RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:988635be61f6ed8c0d707622193b7efe8e9b1dc7effbf9b09d2db5ec593b59e7 - name: RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-unbound@sha256:63e08752678a68571e1c54ceea42c113af493a04cdc22198a3713df7b53f87e5 - name: RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:6741d06b0f1bbeb2968807dc5be45853cdd3dfb9cc7ea6ef23e909ae24f3cbf4 - name: RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-frr@sha256:1803a36d1a397a5595dddb4a2f791ab9443d3af97391a53928fa495ca7032d93 - name: RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-iscsid@sha256:d163fcf801d67d9c67b2ae4368675b75714db7c531de842aad43979a888c5d57 - name: RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT value: quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd - name: RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cron@sha256:15bf81d933a44128cb6f3264632a9563337eb3bfe82c4a33c746595467d3b0c3 - name: RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f - name: RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:3a08e21338f651a90ee83ae46242b8c80c64488144f27a77848517049c3a8f5d - name: RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2 - name: RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:ebeb4443ab9f9360925f7abd9c24b7a453390d678f79ed247d2042dcc6f9c3fc - name: RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:04bb4cd601b08034c6cba18e701fcd36026ec4340402ed710a0bbd09d8e4884d - name: RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c - name: RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT value: 
quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:27b80783b7d4658d89dda9a09924e9ee472908a8fa1c86bcf3f773d17a4196e0 - name: RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd - name: RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f - name: RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-api@sha256:8cb133c5a5551e1aa11ef3326149db1babbf00924d0ff493ebe3346b69fd4b5b - name: RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:13c3567176bb2d033f6c6b30e20404bd67a217e2537210bf222f3afe0c8619b7 - name: RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:60ac3446d57f1a97a6ca2d8e6584b00aa18704bc2707a7ac1a6a28c6d685d215 - name: RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7 - name: RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc - name: RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-redis@sha256:7e7788d1aae251e60f4012870140c65bce9760cd27feaeec5f65c42fe4ffce77 - name: RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:6a401117007514660c694248adce8136d83559caf1b38e475935335e09ac954a - name: RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:364d50f873551805782c23264570eff40e3807f35d9bccdd456515b4e31da488 - name: RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2d72dd490576e0cb670d21a08420888f3758d64ed0cbd2ef8b9aa8488ad2ce40 - name: RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:96fdf7cddf31509ee63950a9d61320d0b01beb1212e28f37a6e872d6589ded22 - name: RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:8b7534a2999075f919fc162d21f76026e8bf781913cc3d2ac07e484e9b2fc596 - name: RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/ironic-python-agent@sha256:d65eaaea2ab02d63af9d8a106619908fa01a2e56bd6753edc5590e66e46270db - name: RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-keystone@sha256:d042d7f91bafb002affff8cf750d694a0da129377255c502028528fe2280e790 - name: RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT value: registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb - name: RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-api@sha256:a8faef9ea5e8ef8327b7fbb9b9cafc74c38c09c7e3b2365a7cad5eb49766f71d - name: RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:88aa46ea03a5584560806aa4b093584fda6b2f54c562005b72be2e3615688090 - name: 
RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-share@sha256:c08ecdfb7638c1897004347d835bdbabacff40a345f64c2b3111c377096bfa56 - name: RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13 - name: RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-netutils@sha256:8b4025a4f30e83acc0b51ac063eea701006a302a1acbdec53f54b540270887f7 - name: RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33 - name: RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-api@sha256:4992f5ddbd20cca07e750846b2dbe7c51c5766c3002c388f8d8a158e347ec63d - name: RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b - name: RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be - name: RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:20b3ad38accb9eb8849599280a263d3436a5af03d89645e5ec4508586297ffde - name: RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:378ed518b68ea809cffa2ff7a93d51e52cfc53af14eedc978924fdabccef0325 - name: RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c - name: RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:3f746f7c6a8c48c0f4a800dcb4bc49bfbc4de4a9ca6a55d8f22bc515a92ea1d9 - name: RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:e1f7bf105190c3cbbfcf0aeeb77a92d1466100ba8377221ed5eee228949e05bd - name: RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:954b4c60705b229a968aba3b5b35ab02759378706103ed1189fae3e3316fac35 - name: RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:f2e0025727efb95efa65e6af6338ae3fc79bf61095d6d54931a0be8d7fe9acac - name: RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944 - name: RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT value: quay.io/openstack-k8s-operators/openstack-must-gather@sha256:854a802357b4f565a366fce3bf29b20c1b768ec4ab7e822ef52dfc2fef000d2c - name: RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 - name: RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:194121c2d79401bd41f75428a437fe32a5806a6a160f7d80798ff66baed9afa5 - name: RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT value: 
quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de - name: RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3 - name: RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:947c1bb9373b7d3f2acea104a5666e394c830111bf80d133f1fe7238e4d06f28 - name: RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:425ebddc9d6851ee9c730e67eaf43039943dc7937fb11332a41335a9114b2d44 - name: RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bea03c7c34dc6ef8bc163e12a8940011b8feebc44a2efaaba2d3c4c6c515d6c8 - name: RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b - name: RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d - name: RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-account@sha256:a2280bc80b454dc9e5c95daf74b8a53d6f9e42fc16d45287e089fc41014fe1da - name: RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-container@sha256:88d687a7bb593b2e61598b422baba84d67c114419590a6d83d15327d119ce208 - name: RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2635e02b99d380b2e547013c09c6c8da01bc89b3d3ce570e4d8f8656c7635b0e - name: RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45 - name: RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a357cf166caaeea230f8a912aceb042e3170c5d680844e8f97b936baa10834ed - name: RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-api@sha256:bf2a07cbf4aec8e8283e14fb134605b15a61db6d3f7965a5e2e3cac66018c73a - name: RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-applier@sha256:10a8ff59cb8b91189b60c6f28155b62cbe2983fb14c053d74967d219c4f8b2af - name: RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:4466fc51f6461209d9a75e53f13a88171143fe5977797a02406b57f32ffaf0ab - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsUser: 1000660000 
terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pcn8m readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: openstack-baremetal-operator-controller-manager-dockercfg-q8l7h nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000660000 runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 seccompProfile: type: RuntimeDefault serviceAccount: openstack-baremetal-operator-controller-manager serviceAccountName: openstack-baremetal-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: cert secret: defaultMode: 420 secretName: openstack-baremetal-operator-webhook-server-cert - name: kube-api-access-pcn8m projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:43Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:51Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:51Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://8f30e0f35c49c60b2cacdf99fee483e8bc681a2bf469ce07dd98e6453468bc9a image: quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48 imageID: quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /tmp/k8s-webhook-server/serving-certs name: cert readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-pcn8m readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.80 podIPs: - ip: 10.217.0.80 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.87/23"],"mac_address":"0a:58:0a:d9:00:57","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.87/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ 
"name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.87" ], "mac": "0a:58:0a:d9:00:57", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: '2025-12-13T06:59:15Z' generateName: openstack-operator-controller-manager-56f6fbdf6- labels: app.kubernetes.io/name: openstack-operator control-plane: controller-manager openstack.org/operator-name: openstack pod-template-hash: 56f6fbdf6 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"6866dfb5-339d-4053-a26a-14d92949ca6b"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"OPENSTACK_RELEASE_VERSION"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT"}: .: {} f:name: {} 
f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} 
k:{"name":"RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:ports: .: {} k:{"containerPort":9443,"protocol":"TCP"}: .: {} f:containerPort: {} f:name: {} f:protocol: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:capabilities: .: {} f:drop: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/tmp/k8s-metrics-server/metrics-certs"}: .: {} f:mountPath: {} f:name: {} f:readOnly: {} k:{"mountPath":"/tmp/k8s-webhook-server/serving-certs"}: .: {} f:mountPath: {} f:name: {} 
f:readOnly: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:seccompProfile: .: {} f:type: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:volumes: .: {} k:{"name":"metrics-certs"}: .: {} f:name: {} f:secret: .: {} f:defaultMode: {} f:items: {} f:optional: {} f:secretName: {} k:{"name":"webhook-certs"}: .: {} f:name: {} f:secret: .: {} f:defaultMode: {} f:secretName: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:48Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.87"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:58Z' name: openstack-operator-controller-manager-56f6fbdf6-q2xrp namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: openstack-operator-controller-manager-56f6fbdf6 uid: 6866dfb5-339d-4053-a26a-14d92949ca6b resourceVersion: '37123' uid: 48353918-3568-4a9c-a5d2-709fb831ee75 spec: containers: - args: - --metrics-bind-address=:8443 - --leader-elect - --health-probe-bind-address=:8081 - --metrics-cert-path=/tmp/k8s-metrics-server/metrics-certs - --webhook-cert-path=/tmp/k8s-webhook-server/serving-certs command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: OPENSTACK_RELEASE_VERSION value: 0.5.0-1765567684 - name: RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:add611bf73d5aab1ac07ef665281ed0e5ad1aded495b8b32927aa2e726abb29a - name: RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:5a3782b78f695106548597c758c23e5d812e81cb0b860f1fd4fe88587351337e - name: RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:36946a77001110f391fb254ec77129803a6b7c34dacfa1a4c8c51aa8d23d57c5 - name: RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:dd58b29b5d88662a621c685c2b76fe8a71cc9e82aa85dff22a66182a6ceef3ae - name: RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:fc47ed1c6249c9f6ef13ef1eac82d5a34819a715dea5117d33df0d0dc69ace8b - name: RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:e21d35c272d016f4dbd323dc827ee83538c96674adfb188e362aa652ce167b61 - name: RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT value: 
registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c - name: RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16 - name: RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:c2ace235f775334be02d78928802b76309543e869cc6b4b55843ee546691e6c3 - name: RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:be77cc58b87f299b42bb2cbe74f3f8d028b8c887851a53209441b60e1363aeb5 - name: RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777 - name: RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:41dc9cf27a902d9c7b392d730bd761cf3c391a548a841e9e4d38e1571f3c53bf - name: RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:174f8f712eb5fdda5061a1a68624befb27bbe766842653788583ec74c5ae506a - name: RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa - name: RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67 - name: RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1 - name: RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49 - name: RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b8d76f96b6f17a3318d089c0b5c0e6c292d969ab392cdcc708ec0f0188c953ae - name: RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:43c55407c7c9b4141482533546e6570535373f7e36df374dfbbe388293c19dbf - name: RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:097816f289af117f14cd8ee1678a9635e8da6de4a1bde834d02199c4ef65c5c0 - name: RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT value: quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:744c4b41194e2cb21e83147626d64fd72438a72d51bb32c3ad90cf1f9711fed1 - name: RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT value: quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:c980be07bda5796425ea2d727826efb48caf3927a425751d5609915a7f68e87e - name: RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-api@sha256:281668af8ed34c2464f3593d350cf7b695b41b81f40cc539ad74b7b65822afb9 - name: RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:84319e5dd6569ea531e64b688557c2a2e20deb5225f3d349e402e34858f00fe7 - name: RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT value: 
quay.io/podified-antelope-centos9/openstack-designate-central@sha256:acb53e0e210562091843c212bc0cf5541daacd6f2bd18923430bae8c36578731 - name: RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:be6f4002842ebadf30d035721567a7e669f12a6eef8c00dc89030b3b08f3dd2c - name: RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:988635be61f6ed8c0d707622193b7efe8e9b1dc7effbf9b09d2db5ec593b59e7 - name: RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-unbound@sha256:63e08752678a68571e1c54ceea42c113af493a04cdc22198a3713df7b53f87e5 - name: RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:6741d06b0f1bbeb2968807dc5be45853cdd3dfb9cc7ea6ef23e909ae24f3cbf4 - name: RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-frr@sha256:1803a36d1a397a5595dddb4a2f791ab9443d3af97391a53928fa495ca7032d93 - name: RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-iscsid@sha256:d163fcf801d67d9c67b2ae4368675b75714db7c531de842aad43979a888c5d57 - name: RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT value: quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd - name: RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cron@sha256:15bf81d933a44128cb6f3264632a9563337eb3bfe82c4a33c746595467d3b0c3 - name: RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f - name: RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:3a08e21338f651a90ee83ae46242b8c80c64488144f27a77848517049c3a8f5d - name: RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2 - name: RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:ebeb4443ab9f9360925f7abd9c24b7a453390d678f79ed247d2042dcc6f9c3fc - name: RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:04bb4cd601b08034c6cba18e701fcd36026ec4340402ed710a0bbd09d8e4884d - name: RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c - name: RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:27b80783b7d4658d89dda9a09924e9ee472908a8fa1c86bcf3f773d17a4196e0 - name: RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd - name: RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f - name: RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT value: 
quay.io/podified-antelope-centos9/openstack-heat-api@sha256:8cb133c5a5551e1aa11ef3326149db1babbf00924d0ff493ebe3346b69fd4b5b - name: RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:13c3567176bb2d033f6c6b30e20404bd67a217e2537210bf222f3afe0c8619b7 - name: RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:60ac3446d57f1a97a6ca2d8e6584b00aa18704bc2707a7ac1a6a28c6d685d215 - name: RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7 - name: RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc - name: RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-redis@sha256:7e7788d1aae251e60f4012870140c65bce9760cd27feaeec5f65c42fe4ffce77 - name: RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:6a401117007514660c694248adce8136d83559caf1b38e475935335e09ac954a - name: RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:364d50f873551805782c23264570eff40e3807f35d9bccdd456515b4e31da488 - name: RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2d72dd490576e0cb670d21a08420888f3758d64ed0cbd2ef8b9aa8488ad2ce40 - name: RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:96fdf7cddf31509ee63950a9d61320d0b01beb1212e28f37a6e872d6589ded22 - name: RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:8b7534a2999075f919fc162d21f76026e8bf781913cc3d2ac07e484e9b2fc596 - name: RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/ironic-python-agent@sha256:d65eaaea2ab02d63af9d8a106619908fa01a2e56bd6753edc5590e66e46270db - name: RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-keystone@sha256:d042d7f91bafb002affff8cf750d694a0da129377255c502028528fe2280e790 - name: RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT value: registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb - name: RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-api@sha256:a8faef9ea5e8ef8327b7fbb9b9cafc74c38c09c7e3b2365a7cad5eb49766f71d - name: RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:88aa46ea03a5584560806aa4b093584fda6b2f54c562005b72be2e3615688090 - name: RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-share@sha256:c08ecdfb7638c1897004347d835bdbabacff40a345f64c2b3111c377096bfa56 - name: RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13 - name: RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-netutils@sha256:8b4025a4f30e83acc0b51ac063eea701006a302a1acbdec53f54b540270887f7 - name: 
RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33 - name: RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-api@sha256:4992f5ddbd20cca07e750846b2dbe7c51c5766c3002c388f8d8a158e347ec63d - name: RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b - name: RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be - name: RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:20b3ad38accb9eb8849599280a263d3436a5af03d89645e5ec4508586297ffde - name: RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:378ed518b68ea809cffa2ff7a93d51e52cfc53af14eedc978924fdabccef0325 - name: RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c - name: RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:3f746f7c6a8c48c0f4a800dcb4bc49bfbc4de4a9ca6a55d8f22bc515a92ea1d9 - name: RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:e1f7bf105190c3cbbfcf0aeeb77a92d1466100ba8377221ed5eee228949e05bd - name: RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:954b4c60705b229a968aba3b5b35ab02759378706103ed1189fae3e3316fac35 - name: RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:f2e0025727efb95efa65e6af6338ae3fc79bf61095d6d54931a0be8d7fe9acac - name: RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944 - name: RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT value: quay.io/openstack-k8s-operators/openstack-must-gather@sha256:854a802357b4f565a366fce3bf29b20c1b768ec4ab7e822ef52dfc2fef000d2c - name: RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 - name: RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:194121c2d79401bd41f75428a437fe32a5806a6a160f7d80798ff66baed9afa5 - name: RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de - name: RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3 - name: RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:947c1bb9373b7d3f2acea104a5666e394c830111bf80d133f1fe7238e4d06f28 - name: RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT value: 
quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:425ebddc9d6851ee9c730e67eaf43039943dc7937fb11332a41335a9114b2d44 - name: RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bea03c7c34dc6ef8bc163e12a8940011b8feebc44a2efaaba2d3c4c6c515d6c8 - name: RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b - name: RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d - name: RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-account@sha256:a2280bc80b454dc9e5c95daf74b8a53d6f9e42fc16d45287e089fc41014fe1da - name: RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-container@sha256:88d687a7bb593b2e61598b422baba84d67c114419590a6d83d15327d119ce208 - name: RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2635e02b99d380b2e547013c09c6c8da01bc89b3d3ce570e4d8f8656c7635b0e - name: RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45 - name: RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a357cf166caaeea230f8a912aceb042e3170c5d680844e8f97b936baa10834ed - name: RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-api@sha256:bf2a07cbf4aec8e8283e14fb134605b15a61db6d3f7965a5e2e3cac66018c73a - name: RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-applier@sha256:10a8ff59cb8b91189b60c6f28155b62cbe2983fb14c053d74967d219c4f8b2af - name: RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:4466fc51f6461209d9a75e53f13a88171143fe5977797a02406b57f32ffaf0ab image: quay.io/openstack-k8s-operators/openstack-operator@sha256:e2fbc2e7072eb824d265ecca0bc2eb120464a917d1473445d33f02c97487ea39 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager ports: - containerPort: 9443 name: webhook-server protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 1Gi requests: cpu: 10m memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsUser: 1000660000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp/k8s-metrics-server/metrics-certs name: metrics-certs readOnly: true - mountPath: /tmp/k8s-webhook-server/serving-certs name: webhook-certs readOnly: true - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vbd5c readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: openstack-operator-controller-manager-dockercfg-d7t4k nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always 
schedulerName: default-scheduler securityContext: fsGroup: 1000660000 runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 seccompProfile: type: RuntimeDefault serviceAccount: openstack-operator-controller-manager serviceAccountName: openstack-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: metrics-certs secret: defaultMode: 420 items: - key: ca.crt path: ca.crt - key: tls.crt path: tls.crt - key: tls.key path: tls.key optional: false secretName: metrics-server-cert - name: webhook-certs secret: defaultMode: 420 secretName: webhook-server-cert - name: kube-api-access-vbd5c projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:49Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:58Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:58Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://cc1e97875227a67ff39f234f493c7d589cd93eaa6f40eff1de511e619f2e7637 image: quay.io/openstack-k8s-operators/openstack-operator@sha256:e2fbc2e7072eb824d265ecca0bc2eb120464a917d1473445d33f02c97487ea39 imageID: quay.io/openstack-k8s-operators/openstack-operator@sha256:e2fbc2e7072eb824d265ecca0bc2eb120464a917d1473445d33f02c97487ea39 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:48Z' volumeMounts: - mountPath: /tmp/k8s-metrics-server/metrics-certs name: metrics-certs readOnly: true recursiveReadOnly: Disabled - mountPath: /tmp/k8s-webhook-server/serving-certs name: webhook-certs readOnly: true recursiveReadOnly: Disabled - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-vbd5c readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.87 podIPs: - ip: 10.217.0.87 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: alm-examples: |- [ { "apiVersion": "operator.openstack.org/v1beta1", "kind": "OpenStack", "metadata": { "labels": { "app.kubernetes.io/created-by": "openstack-operator", "app.kubernetes.io/instance": "openstack", "app.kubernetes.io/managed-by": "kustomize", "app.kubernetes.io/name": "openstack", "app.kubernetes.io/part-of": "openstack-operator" }, "name": "openstack", "namespace": "openstack-operators" }, "spec": { "operatorOverrides": [ { "controllerManager": { "resources": { "limits": { "cpu": "600m", "memory": "2Gi" } } }, "name": "infra", "replicas": 1 } ] } } ] capabilities: Seamless Upgrades createdAt: '2025-12-12T19:28:05Z' features.operators.openshift.io/disconnected: 'true' 
features.operators.openshift.io/fips-compliant: 'true' features.operators.openshift.io/proxy-aware: 'false' features.operators.openshift.io/tls-profiles: 'false' features.operators.openshift.io/token-auth-aws: 'false' features.operators.openshift.io/token-auth-azure: 'false' features.operators.openshift.io/token-auth-gcp: 'false' k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.252/23"],"mac_address":"0a:58:0a:d9:00:fc","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.252/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.252" ], "mac": "0a:58:0a:d9:00:fc", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: operator olm.operatorGroup: openstack olm.operatorNamespace: openstack-operators olm.targetNamespaces: '' openshift.io/scc: restricted-v2 operatorframework.io/initialization-resource: '{"apiVersion":"operator.openstack.org/v1beta1","kind":"OpenStack","metadata":{"name":"openstack","namespace":"openstack-operators"},"spec":{}}' operatorframework.io/properties: '{"properties":[{"type":"olm.gvk","value":{"group":"operator.openstack.org","kind":"OpenStack","version":"v1beta1"}},{"type":"olm.package","value":{"packageName":"openstack-operator","version":"0.5.0"}}]}' operatorframework.io/suggested-namespace: openstack-operators operators.openshift.io/valid-subscription: '["OpenShift Container Platform", "OpenShift Platform Plus"]' operators.operatorframework.io/builder: operator-sdk-v1.41.1 operators.operatorframework.io/project_layout: go.kubebuilder.io/v4 seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: '2025-12-13T07:31:57Z' generateName: openstack-operator-controller-operator-859586489- labels: app.kubernetes.io/name: openstack-operator-controller-operator control-plane: controller-manager openstack.org/operator-name: openstack-init pod-template-hash: '859586489' managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T07:31:57Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:alm-examples: {} f:capabilities: {} f:createdAt: {} f:features.operators.openshift.io/disconnected: {} f:features.operators.openshift.io/fips-compliant: {} f:features.operators.openshift.io/proxy-aware: {} f:features.operators.openshift.io/tls-profiles: {} f:features.operators.openshift.io/token-auth-aws: {} f:features.operators.openshift.io/token-auth-azure: {} f:features.operators.openshift.io/token-auth-gcp: {} f:kubectl.kubernetes.io/default-container: {} f:olm.operatorGroup: {} f:olm.operatorNamespace: {} f:olm.targetNamespaces: {} f:operatorframework.io/initialization-resource: {} f:operatorframework.io/properties: {} f:operatorframework.io/suggested-namespace: {} f:operators.openshift.io/valid-subscription: {} f:operators.operatorframework.io/builder: {} f:operators.operatorframework.io/project_layout: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"0b339b78-adab-4cd2-8ec5-695412000d0b"}: {} f:spec: f:containers: k:{"name":"operator"}: .: {} 
f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"OPENSTACK_RELEASE_VERSION"}: .: {} f:name: {} f:value: {} k:{"name":"OPERATOR_CONDITION_NAME"}: .: {} f:name: {} f:value: {} k:{"name":"OPERATOR_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} 
k:{"name":"RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_GLANCE_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HEAT_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_HORIZON_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_INFRA_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_KEYSTONE_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_MARIADB_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NEUTRON_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} 
k:{"name":"RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_BAREMETAL_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_PLACEMENT_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_RABBITMQ_CLUSTER_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_TELEMETRY_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_TEST_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"RELATED_IMAGE_WATCHER_OPERATOR_MANAGER_IMAGE_URL"}: .: {} f:name: {} f:value: {} 
k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} k:{"name":"TEST_ANSIBLETEST_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"TEST_HORIZONTEST_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} k:{"name":"TEST_TOBIKO_IMAGE_URL_DEFAULT"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T07:31:57Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T07:31:57Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.252"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T07:32:07Z' name: openstack-operator-controller-operator-859586489-hlw4r namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: openstack-operator-controller-operator-859586489 uid: 0b339b78-adab-4cd2-8ec5-695412000d0b resourceVersion: '61898' uid: 24da4990-742e-476a-aa8b-0a30e8dc0930 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 command: - /operator env: - name: RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:add611bf73d5aab1ac07ef665281ed0e5ad1aded495b8b32927aa2e726abb29a - name: RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:5a3782b78f695106548597c758c23e5d812e81cb0b860f1fd4fe88587351337e - name: RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:36946a77001110f391fb254ec77129803a6b7c34dacfa1a4c8c51aa8d23d57c5 - name: RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:dd58b29b5d88662a621c685c2b76fe8a71cc9e82aa85dff22a66182a6ceef3ae - name: RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT value: 
quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:fc47ed1c6249c9f6ef13ef1eac82d5a34819a715dea5117d33df0d0dc69ace8b - name: RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:e21d35c272d016f4dbd323dc827ee83538c96674adfb188e362aa652ce167b61 - name: RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT value: registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c - name: RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16 - name: RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:c2ace235f775334be02d78928802b76309543e869cc6b4b55843ee546691e6c3 - name: RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:be77cc58b87f299b42bb2cbe74f3f8d028b8c887851a53209441b60e1363aeb5 - name: RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777 - name: RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:41dc9cf27a902d9c7b392d730bd761cf3c391a548a841e9e4d38e1571f3c53bf - name: RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:174f8f712eb5fdda5061a1a68624befb27bbe766842653788583ec74c5ae506a - name: RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67 - name: RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa - name: RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT value: quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1 - name: RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT value: registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb - name: RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49 - name: RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b8d76f96b6f17a3318d089c0b5c0e6c292d969ab392cdcc708ec0f0188c953ae - name: RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:43c55407c7c9b4141482533546e6570535373f7e36df374dfbbe388293c19dbf - name: RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:097816f289af117f14cd8ee1678a9635e8da6de4a1bde834d02199c4ef65c5c0 - name: RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT value: quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:744c4b41194e2cb21e83147626d64fd72438a72d51bb32c3ad90cf1f9711fed1 - name: RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT value: 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:c980be07bda5796425ea2d727826efb48caf3927a425751d5609915a7f68e87e - name: RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-api@sha256:281668af8ed34c2464f3593d350cf7b695b41b81f40cc539ad74b7b65822afb9 - name: RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:84319e5dd6569ea531e64b688557c2a2e20deb5225f3d349e402e34858f00fe7 - name: RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-central@sha256:acb53e0e210562091843c212bc0cf5541daacd6f2bd18923430bae8c36578731 - name: RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:be6f4002842ebadf30d035721567a7e669f12a6eef8c00dc89030b3b08f3dd2c - name: RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:988635be61f6ed8c0d707622193b7efe8e9b1dc7effbf9b09d2db5ec593b59e7 - name: RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-unbound@sha256:63e08752678a68571e1c54ceea42c113af493a04cdc22198a3713df7b53f87e5 - name: RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:6741d06b0f1bbeb2968807dc5be45853cdd3dfb9cc7ea6ef23e909ae24f3cbf4 - name: RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-frr@sha256:1803a36d1a397a5595dddb4a2f791ab9443d3af97391a53928fa495ca7032d93 - name: RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-iscsid@sha256:d163fcf801d67d9c67b2ae4368675b75714db7c531de842aad43979a888c5d57 - name: RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-cron@sha256:15bf81d933a44128cb6f3264632a9563337eb3bfe82c4a33c746595467d3b0c3 - name: RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f - name: RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:3a08e21338f651a90ee83ae46242b8c80c64488144f27a77848517049c3a8f5d - name: RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2 - name: RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:ebeb4443ab9f9360925f7abd9c24b7a453390d678f79ed247d2042dcc6f9c3fc - name: RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:04bb4cd601b08034c6cba18e701fcd36026ec4340402ed710a0bbd09d8e4884d - name: RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c - name: RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT value: quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd - name: RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT value: 
quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 - name: RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:27b80783b7d4658d89dda9a09924e9ee472908a8fa1c86bcf3f773d17a4196e0 - name: RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT value: quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd - name: RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f - name: RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-api@sha256:8cb133c5a5551e1aa11ef3326149db1babbf00924d0ff493ebe3346b69fd4b5b - name: RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:13c3567176bb2d033f6c6b30e20404bd67a217e2537210bf222f3afe0c8619b7 - name: RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:60ac3446d57f1a97a6ca2d8e6584b00aa18704bc2707a7ac1a6a28c6d685d215 - name: RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7 - name: RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc - name: RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-redis@sha256:7e7788d1aae251e60f4012870140c65bce9760cd27feaeec5f65c42fe4ffce77 - name: RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:6a401117007514660c694248adce8136d83559caf1b38e475935335e09ac954a - name: RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:364d50f873551805782c23264570eff40e3807f35d9bccdd456515b4e31da488 - name: RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2d72dd490576e0cb670d21a08420888f3758d64ed0cbd2ef8b9aa8488ad2ce40 - name: RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:96fdf7cddf31509ee63950a9d61320d0b01beb1212e28f37a6e872d6589ded22 - name: RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:8b7534a2999075f919fc162d21f76026e8bf781913cc3d2ac07e484e9b2fc596 - name: RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/ironic-python-agent@sha256:d65eaaea2ab02d63af9d8a106619908fa01a2e56bd6753edc5590e66e46270db - name: RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-keystone@sha256:d042d7f91bafb002affff8cf750d694a0da129377255c502028528fe2280e790 - name: RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-api@sha256:a8faef9ea5e8ef8327b7fbb9b9cafc74c38c09c7e3b2365a7cad5eb49766f71d - name: RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:88aa46ea03a5584560806aa4b093584fda6b2f54c562005b72be2e3615688090 - name: 
RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-manila-share@sha256:c08ecdfb7638c1897004347d835bdbabacff40a345f64c2b3111c377096bfa56 - name: RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13 - name: RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-netutils@sha256:8b4025a4f30e83acc0b51ac063eea701006a302a1acbdec53f54b540270887f7 - name: RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33 - name: RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-api@sha256:4992f5ddbd20cca07e750846b2dbe7c51c5766c3002c388f8d8a158e347ec63d - name: RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:526afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b - name: RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be - name: RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:20b3ad38accb9eb8849599280a263d3436a5af03d89645e5ec4508586297ffde - name: RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:378ed518b68ea809cffa2ff7a93d51e52cfc53af14eedc978924fdabccef0325 - name: RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c - name: RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:3f746f7c6a8c48c0f4a800dcb4bc49bfbc4de4a9ca6a55d8f22bc515a92ea1d9 - name: RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:e1f7bf105190c3cbbfcf0aeeb77a92d1466100ba8377221ed5eee228949e05bd - name: RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:f2e0025727efb95efa65e6af6338ae3fc79bf61095d6d54931a0be8d7fe9acac - name: RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:954b4c60705b229a968aba3b5b35ab02759378706103ed1189fae3e3316fac35 - name: RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944 - name: RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:194121c2d79401bd41f75428a437fe32a5806a6a160f7d80798ff66baed9afa5 - name: RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de - name: RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3 - name: RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT value: 
quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:947c1bb9373b7d3f2acea104a5666e394c830111bf80d133f1fe7238e4d06f28 - name: RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:425ebddc9d6851ee9c730e67eaf43039943dc7937fb11332a41335a9114b2d44 - name: RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bea03c7c34dc6ef8bc163e12a8940011b8feebc44a2efaaba2d3c4c6c515d6c8 - name: RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b - name: RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d - name: RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-account@sha256:a2280bc80b454dc9e5c95daf74b8a53d6f9e42fc16d45287e089fc41014fe1da - name: RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-container@sha256:88d687a7bb593b2e61598b422baba84d67c114419590a6d83d15327d119ce208 - name: RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2635e02b99d380b2e547013c09c6c8da01bc89b3d3ce570e4d8f8656c7635b0e - name: RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45 - name: RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a357cf166caaeea230f8a912aceb042e3170c5d680844e8f97b936baa10834ed - name: RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-api@sha256:bf2a07cbf4aec8e8283e14fb134605b15a61db6d3f7965a5e2e3cac66018c73a - name: RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-applier@sha256:10a8ff59cb8b91189b60c6f28155b62cbe2983fb14c053d74967d219c4f8b2af - name: RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT value: quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:4466fc51f6461209d9a75e53f13a88171143fe5977797a02406b57f32ffaf0ab - name: TEST_TOBIKO_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-tobiko:current-podified - name: TEST_ANSIBLETEST_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified - name: TEST_HORIZONTEST_IMAGE_URL_DEFAULT value: quay.io/podified-antelope-centos9/openstack-horizontest:current-podified - name: RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT value: quay.io/openstack-k8s-operators/openstack-must-gather@sha256:854a802357b4f565a366fce3bf29b20c1b768ec4ab7e822ef52dfc2fef000d2c - name: RELATED_IMAGE_BARBICAN_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea - name: RELATED_IMAGE_CINDER_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3 - name: RELATED_IMAGE_DESIGNATE_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a - name: 
RELATED_IMAGE_GLANCE_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027 - name: RELATED_IMAGE_HEAT_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429 - name: RELATED_IMAGE_HORIZON_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5 - name: RELATED_IMAGE_INFRA_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab - name: RELATED_IMAGE_IRONIC_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87 - name: RELATED_IMAGE_KEYSTONE_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7 - name: RELATED_IMAGE_MANILA_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a - name: RELATED_IMAGE_MARIADB_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad - name: RELATED_IMAGE_NEUTRON_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557 - name: RELATED_IMAGE_NOVA_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670 - name: RELATED_IMAGE_OCTAVIA_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168 - name: RELATED_IMAGE_OPENSTACK_BAREMETAL_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48 - name: RELATED_IMAGE_OVN_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59 - name: RELATED_IMAGE_PLACEMENT_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f - name: RELATED_IMAGE_RABBITMQ_CLUSTER_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2 - name: RELATED_IMAGE_SWIFT_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991 - name: RELATED_IMAGE_TELEMETRY_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f - name: RELATED_IMAGE_TEST_OPERATOR_MANAGER_IMAGE_URL value: 38.129.56.153:5001/openstack-k8s-operators/test-operator:d19f803f400b92d4afd97dd749e753a7435bfaca - name: RELATED_IMAGE_WATCHER_OPERATOR_MANAGER_IMAGE_URL value: quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a - name: OPENSTACK_RELEASE_VERSION value: 
0.5.0-1765567684 - name: OPERATOR_IMAGE_URL value: quay.io/openstack-k8s-operators/openstack-operator@sha256:e2fbc2e7072eb824d265ecca0bc2eb120464a917d1473445d33f02c97487ea39 - name: ENABLE_WEBHOOKS value: 'false' - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: OPERATOR_CONDITION_NAME value: openstack-operator.v0.5.0 image: quay.io/openstack-k8s-operators/openstack-operator@sha256:e2fbc2e7072eb824d265ecca0bc2eb120464a917d1473445d33f02c97487ea39 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: operator readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 256Mi requests: cpu: 10m memory: 128Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsUser: 1000660000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-h5xtj readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: openstack-operator-controller-operator-dockercfg-hp5tj nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000660000 runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 seccompProfile: type: RuntimeDefault serviceAccount: openstack-operator-controller-operator serviceAccountName: openstack-operator-controller-operator terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-h5xtj projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T07:31:58Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T07:31:57Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T07:32:07Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T07:32:07Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T07:31:57Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://16d66bf93409d80b5a6bfed7bb24770a3e65f7fe2fee92e7e07fc630848198ef image: quay.io/openstack-k8s-operators/openstack-operator@sha256:e2fbc2e7072eb824d265ecca0bc2eb120464a917d1473445d33f02c97487ea39 imageID: quay.io/openstack-k8s-operators/openstack-operator@sha256:e2fbc2e7072eb824d265ecca0bc2eb120464a917d1473445d33f02c97487ea39 lastState: {} name: operator ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T07:31:57Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: 
kube-api-access-h5xtj readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.252 podIPs: - ip: 10.217.0.252 qosClass: Burstable startTime: '2025-12-13T07:31:57Z' - apiVersion: v1 kind: Pod metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: 'true' k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.50/23"],"mac_address":"0a:58:0a:d9:00:32","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.50/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.50" ], "mac": "0a:58:0a:d9:00:32", "default": true, "dns": {} }] kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operators.coreos.com/v1alpha1","kind":"CatalogSource","metadata":{"annotations":{},"name":"openstack-operator-index","namespace":"openstack-operators"},"spec":{"image":"quay.io/openstack-k8s-operators/openstack-operator-index:latest","sourceType":"grpc"}} openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:58:03Z' generateName: openstack-operator-index- labels: olm.catalogSource: openstack-operator-index olm.managed: 'true' olm.pod-spec-hash: MABo4Ww04eFPoHLn6ek4TWfehh9cnfF89Mmbi managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:cluster-autoscaler.kubernetes.io/safe-to-evict: {} f:kubectl.kubernetes.io/last-applied-configuration: {} f:generateName: {} f:labels: .: {} f:olm.catalogSource: {} f:olm.managed: {} f:olm.pod-spec-hash: {} f:ownerReferences: .: {} k:{"uid":"28ce88b9-e78b-48ec-85e1-d5859eb2a55b"}: {} f:spec: f:containers: k:{"name":"registry-server"}: .: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:exec: .: {} f:command: {} f:failureThreshold: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:ports: .: {} k:{"containerPort":50051,"protocol":"TCP"}: .: {} f:containerPort: {} f:name: {} f:protocol: {} f:readinessProbe: .: {} f:exec: .: {} f:command: {} f:failureThreshold: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:readOnlyRootFilesystem: {} f:startupProbe: .: {} f:exec: .: {} f:command: {} f:failureThreshold: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:imagePullSecrets: .: {} k:{"name":"openstack-operator-index-dockercfg-g6mfp"}: {} f:nodeSelector: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} manager: catalog operation: Update time: '2025-12-13T06:58:03Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:58:03Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:58:04Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: 
k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.50"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:58:14Z' name: openstack-operator-index-2ktgc namespace: openstack-operators ownerReferences: - apiVersion: operators.coreos.com/v1alpha1 blockOwnerDeletion: false controller: true kind: CatalogSource name: openstack-operator-index uid: 28ce88b9-e78b-48ec-85e1-d5859eb2a55b resourceVersion: '34048' uid: 8a8b2af3-ff75-4a0a-a3ec-6f1b90619082 spec: containers: - image: quay.io/openstack-k8s-operators/openstack-operator-index:latest imagePullPolicy: Always livenessProbe: exec: command: - grpc_health_probe - -addr=:50051 failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 name: registry-server ports: - containerPort: 50051 name: grpc protocol: TCP readinessProbe: exec: command: - grpc_health_probe - -addr=:50051 failureThreshold: 3 initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: requests: cpu: 10m memory: 50Mi securityContext: capabilities: drop: - MKNOD readOnlyRootFilesystem: false startupProbe: exec: command: - grpc_health_probe - -addr=:50051 failureThreshold: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 terminationMessagePath: /dev/termination-log terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ll872 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: openstack-operator-index-dockercfg-g6mfp nodeName: crc nodeSelector: kubernetes.io/os: linux preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: seLinuxOptions: level: s0:c26,c5 serviceAccount: openstack-operator-index serviceAccountName: openstack-operator-index terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-ll872 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:05Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:03Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:14Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:58:14Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: 
'2025-12-13T06:58:03Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://b275b9d2f1fac7a195cbbd234b3982b17999a9bd6670d1821207851eba590c71 image: quay.io/openstack-k8s-operators/openstack-operator-index:latest imageID: quay.io/openstack-k8s-operators/openstack-operator-index@sha256:0a788b725574fecdf2e6a8fa2c82830fac53413dd13caff7464ff3fb557501f1 lastState: {} name: registry-server ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:58:05Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-ll872 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.50 podIPs: - ip: 10.217.0.50 qosClass: Burstable startTime: '2025-12-13T06:58:03Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.82/23"],"mac_address":"0a:58:0a:d9:00:52","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.82/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.82" ], "mac": "0a:58:0a:d9:00:52", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: ovn-operator-controller-manager-bf6d4f946- labels: app.kubernetes.io/name: ovn-operator control-plane: controller-manager openstack.org/operator-name: ovn pod-template-hash: bf6d4f946 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"cd85ce57-60d4-41c6-b5ad-84038b10f735"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} 
manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.82"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:55Z' name: ovn-operator-controller-manager-bf6d4f946-mtbhx namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: ovn-operator-controller-manager-bf6d4f946 uid: cd85ce57-60d4-41c6-b5ad-84038b10f735 resourceVersion: '37076' uid: 9f650b3c-af01-4ce4-a702-daab8d5affc5 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fcsn7 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: ovn-operator-controller-manager-dockercfg-8fxr2 nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: ovn-operator-controller-manager serviceAccountName: ovn-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-fcsn7 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - 
key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:55Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://33a4c7d61f9be2b30009ffa950736a7d6b58554233d1029e38870c5959381932 image: quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59 imageID: quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fcsn7 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.82 podIPs: - ip: 10.217.0.82 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.81/23"],"mac_address":"0a:58:0a:d9:00:51","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.81/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.81" ], "mac": "0a:58:0a:d9:00:51", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: placement-operator-controller-manager-8665b56d78- labels: app.kubernetes.io/name: placement-operator control-plane: controller-manager openstack.org/operator-name: placement pod-template-hash: 8665b56d78 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"71d557af-44de-4707-a8d7-c745bddf6acc"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} 
f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.81"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:56Z' name: placement-operator-controller-manager-8665b56d78-kfnbp namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: placement-operator-controller-manager-8665b56d78 uid: 71d557af-44de-4707-a8d7-c745bddf6acc resourceVersion: '37081' uid: 2e6fefac-bf85-4f28-a30d-808e83a13141 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rphhh readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: placement-operator-controller-manager-dockercfg-7hppm nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: placement-operator-controller-manager serviceAccountName: 
placement-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-rphhh projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:56Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:56Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://ca199da85d4bfdd3168e9cb9463e1cc5cc6ca49d9e3ef9f15cfd22e988a5ea5f image: quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f imageID: quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-rphhh readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.81 podIPs: - ip: 10.217.0.81 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.88/23"],"mac_address":"0a:58:0a:d9:00:58","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.88/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.88" ], "mac": "0a:58:0a:d9:00:58", "default": true, "dns": {} }] openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: '2025-12-13T06:59:15Z' generateName: rabbitmq-cluster-operator-manager-668c99d594- labels: app.kubernetes.io/component: rabbitmq-operator app.kubernetes.io/name: rabbitmq-cluster-operator app.kubernetes.io/part-of: rabbitmq pod-template-hash: 668c99d594 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:app.kubernetes.io/component: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:pod-template-hash: {} f:ownerReferences: .: {} 
k:{"uid":"6a1f5181-d8bf-4fa5-8de0-df8dafe80c70"}: {} f:spec: f:containers: k:{"name":"operator"}: .: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"OPERATOR_NAMESPACE"}: .: {} f:name: {} f:valueFrom: .: {} f:fieldRef: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:name: {} f:ports: .: {} k:{"containerPort":9782,"protocol":"TCP"}: .: {} f:containerPort: {} f:name: {} f:protocol: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.88"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:43Z' name: rabbitmq-cluster-operator-manager-668c99d594-gfl6d namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: rabbitmq-cluster-operator-manager-668c99d594 uid: 6a1f5181-d8bf-4fa5-8de0-df8dafe80c70 resourceVersion: '36906' uid: 088c2258-52fc-4a04-b4c8-af259e9d2b75 spec: containers: - command: - /manager env: - name: OPERATOR_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2 imagePullPolicy: IfNotPresent name: operator ports: - containerPort: 9782 name: metrics protocol: TCP resources: limits: cpu: 200m memory: 500Mi requests: cpu: 5m memory: 64Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000660000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-h72nz readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: rabbitmq-cluster-operator-controller-manager-dockercfg-kwzgc nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: 
default-scheduler securityContext: fsGroup: 1000660000 seLinuxOptions: level: s0:c26,c5 seccompProfile: type: RuntimeDefault serviceAccount: rabbitmq-cluster-operator-controller-manager serviceAccountName: rabbitmq-cluster-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-h72nz projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:16Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:43Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:43Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://cbdd9ca1dcb8ebf189ddc12e073134809b443a8a42a0e277186a88f7aaa52b77 image: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2 imageID: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2 lastState: {} name: operator ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-h72nz readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.88 podIPs: - ip: 10.217.0.88 qosClass: Burstable startTime: '2025-12-13T06:59:16Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.83/23"],"mac_address":"0a:58:0a:d9:00:53","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.83/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.83" ], "mac": "0a:58:0a:d9:00:53", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: restricted-v2 seccomp.security.alpha.kubernetes.io/pod: runtime/default creationTimestamp: '2025-12-13T06:59:15Z' generateName: swift-operator-controller-manager-5c6df8f9- labels: app.kubernetes.io/name: swift-operator control-plane: controller-manager openstack.org/operator-name: swift pod-template-hash: 5c6df8f9 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: 
FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"d87d08ef-0ca0-4860-8cfb-a8b5dbb88e49"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.83"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:56Z' name: swift-operator-controller-manager-5c6df8f9-bvc6f namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: swift-operator-controller-manager-5c6df8f9 uid: d87d08ef-0ca0-4860-8cfb-a8b5dbb88e49 resourceVersion: '37086' uid: da649804-862c-45db-97ee-ad47fed7a72d spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: 
manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsUser: 1000660000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fd75c readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: swift-operator-controller-manager-dockercfg-zxk9q nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1000660000 runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 seccompProfile: type: RuntimeDefault serviceAccount: swift-operator-controller-manager serviceAccountName: swift-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-fd75c projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:56Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:56Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://1d6588731f12410f1cbc0f53fd486540491ba5123707114bf9953a20e0a7565a image: quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991 imageID: quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-fd75c readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.83 podIPs: - ip: 10.217.0.83 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.84/23"],"mac_address":"0a:58:0a:d9:00:54","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.84/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": 
"ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.84" ], "mac": "0a:58:0a:d9:00:54", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: telemetry-operator-controller-manager-97d456b9- labels: app.kubernetes.io/name: telemetry-operator control-plane: controller-manager openstack.org/operator-name: telemetry pod-template-hash: 97d456b9 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"e3a431b9-25d9-4d5d-9658-78afd766d2b3"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.84"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:36Z' name: telemetry-operator-controller-manager-97d456b9-fqsfn namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: telemetry-operator-controller-manager-97d456b9 uid: e3a431b9-25d9-4d5d-9658-78afd766d2b3 resourceVersion: '36661' uid: 
1adec510-a153-47b2-ae1d-5430d4ff5e31 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jxgdf readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: telemetry-operator-controller-manager-dockercfg-nmcvv nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: telemetry-operator-controller-manager serviceAccountName: telemetry-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-jxgdf projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:36Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:36Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://6907f0fa860435b3c8725cc3cd499797b690e4e352683c8f14b9d81bc3e5b15a image: quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f imageID: quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-jxgdf readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 
phase: Running podIP: 10.217.0.84 podIPs: - ip: 10.217.0.84 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.85/23"],"mac_address":"0a:58:0a:d9:00:55","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.85/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.85" ], "mac": "0a:58:0a:d9:00:55", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: test-operator-controller-manager-756ccf86c7- labels: app.kubernetes.io/name: test-operator control-plane: controller-manager openstack.org/operator-name: test pod-template-hash: 756ccf86c7 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"8595bec6-1d65-4f62-b185-d9dcf3ad05de"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: 
.: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.85"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:36Z' name: test-operator-controller-manager-756ccf86c7-46n8m namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: test-operator-controller-manager-756ccf86c7 uid: 8595bec6-1d65-4f62-b185-d9dcf3ad05de resourceVersion: '36676' uid: a02dee9b-ffed-4a5a-b833-cb236c105371 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-f6684 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: test-operator-controller-manager-dockercfg-5p78g nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: test-operator-controller-manager serviceAccountName: test-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-f6684 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:25Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:36Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:36Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: 
cri-o://455b941779b62ce63ff7a2b6ecbe7872ba833a430246e21299bc983a54c04188 image: quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94 imageID: quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94 lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:25Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-f6684 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.85 podIPs: - ip: 10.217.0.85 qosClass: Burstable startTime: '2025-12-13T06:59:15Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.253/23"],"mac_address":"0a:58:0a:d9:00:fd","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.253/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.253" ], "mac": "0a:58:0a:d9:00:fd", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T07:32:39Z' generateName: test-operator-controller-manager-9fc9c756c- labels: app.kubernetes.io/name: test-operator control-plane: controller-manager openstack.org/operator-name: test pod-template-hash: 9fc9c756c managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T07:32:39Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"cc9aca8d-3037-4536-90eb-8a03bf198030"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update 
time: '2025-12-13T07:32:39Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T07:32:39Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.253"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T07:34:54Z' name: test-operator-controller-manager-9fc9c756c-8sjtq namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: test-operator-controller-manager-9fc9c756c uid: cc9aca8d-3037-4536-90eb-8a03bf198030 resourceVersion: '63442' uid: 511296bd-fff8-49c1-bbfd-b702905f6e83 spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: 38.129.56.153:5001/openstack-k8s-operators/test-operator:d19f803f400b92d4afd97dd749e753a7435bfaca imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-kvtgd readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: test-operator-controller-manager-dockercfg-5p78g nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: test-operator-controller-manager serviceAccountName: test-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-kvtgd projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt 
path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T07:34:40Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T07:32:39Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T07:32:39Z' message: 'containers with unready status: [manager]' reason: ContainersNotReady status: 'False' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T07:32:39Z' message: 'containers with unready status: [manager]' reason: ContainersNotReady status: 'False' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T07:32:39Z' status: 'True' type: PodScheduled containerStatuses: - image: 38.129.56.153:5001/openstack-k8s-operators/test-operator:d19f803f400b92d4afd97dd749e753a7435bfaca imageID: '' lastState: {} name: manager ready: false restartCount: 0 started: false state: waiting: message: Back-off pulling image "38.129.56.153:5001/openstack-k8s-operators/test-operator:d19f803f400b92d4afd97dd749e753a7435bfaca" reason: ImagePullBackOff volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-kvtgd readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Pending podIP: 10.217.0.253 podIPs: - ip: 10.217.0.253 qosClass: Burstable startTime: '2025-12-13T07:32:39Z' - apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/pod-networks: '{"default":{"ip_addresses":["10.217.0.86/23"],"mac_address":"0a:58:0a:d9:00:56","gateway_ips":["10.217.0.1"],"routes":[{"dest":"10.217.0.0/22","nextHop":"10.217.0.1"},{"dest":"10.217.4.0/23","nextHop":"10.217.0.1"},{"dest":"169.254.0.5/32","nextHop":"10.217.0.1"},{"dest":"100.64.0.0/16","nextHop":"10.217.0.1"}],"ip_address":"10.217.0.86/23","gateway_ip":"10.217.0.1","role":"primary"}}' k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.217.0.86" ], "mac": "0a:58:0a:d9:00:56", "default": true, "dns": {} }] kubectl.kubernetes.io/default-container: manager openshift.io/scc: anyuid creationTimestamp: '2025-12-13T06:59:15Z' generateName: watcher-operator-controller-manager-55f78b7c4c- labels: app.kubernetes.io/name: watcher-operator control-plane: controller-manager openstack.org/operator-name: watcher pod-template-hash: 55f78b7c4c managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.ovn.org/pod-networks: {} manager: crc operation: Update subresource: status time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:kubectl.kubernetes.io/default-container: {} f:generateName: {} f:labels: .: {} f:app.kubernetes.io/name: {} f:control-plane: {} f:openstack.org/operator-name: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"45511f3d-c14e-48e1-8d95-3054047df722"}: {} f:spec: f:containers: k:{"name":"manager"}: .: {} f:args: {} f:command: {} f:env: .: {} k:{"name":"ENABLE_WEBHOOKS"}: .: {} f:name: {} f:value: {} k:{"name":"LEASE_DURATION"}: .: {} f:name: {} f:value: {} k:{"name":"METRICS_CERTS"}: .: {} f:name: {} f:value: {} k:{"name":"RENEW_DEADLINE"}: .: {} f:name: {} f:value: {} k:{"name":"RETRY_PERIOD"}: .: {} f:name: {} f:value: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: 
{} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:initialDelaySeconds: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: .: {} f:limits: .: {} f:cpu: {} f:memory: {} f:requests: .: {} f:cpu: {} f:memory: {} f:securityContext: .: {} f:allowPrivilegeEscalation: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: .: {} f:runAsNonRoot: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} manager: kube-controller-manager operation: Update time: '2025-12-13T06:59:15Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: f:k8s.v1.cni.cncf.io/network-status: {} manager: multus-daemon operation: Update subresource: status time: '2025-12-13T06:59:16Z' - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"PodReadyToStartContainers"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:hostIPs: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.217.0.86"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update subresource: status time: '2025-12-13T06:59:56Z' name: watcher-operator-controller-manager-55f78b7c4c-zgnj9 namespace: openstack-operators ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: watcher-operator-controller-manager-55f78b7c4c uid: 45511f3d-c14e-48e1-8d95-3054047df722 resourceVersion: '37092' uid: 3b75e9d6-b3a1-46ec-ae83-830583970e9c spec: containers: - args: - --leader-elect - --health-probe-bind-address=:8081 - --metrics-bind-address=127.0.0.1:8080 command: - /manager env: - name: LEASE_DURATION value: '30' - name: RENEW_DEADLINE value: '20' - name: RETRY_PERIOD value: '5' - name: ENABLE_WEBHOOKS value: 'false' - name: METRICS_CERTS value: 'false' image: quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 8081 scheme: HTTP initialDelaySeconds: 15 periodSeconds: 20 successThreshold: 1 timeoutSeconds: 1 name: manager readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: 8081 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 256Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - MKNOD terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-stbm5 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true imagePullSecrets: - name: watcher-operator-controller-manager-dockercfg-5g6p5 nodeName: crc preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: runAsNonRoot: true seLinuxOptions: level: s0:c26,c5 serviceAccount: watcher-operator-controller-manager serviceAccountName: 
watcher-operator-controller-manager terminationGracePeriodSeconds: 10 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 120 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 120 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists volumes: - name: kube-api-access-stbm5 projected: defaultMode: 420 sources: - serviceAccountToken: expirationSeconds: 3607 path: token - configMap: items: - key: ca.crt path: ca.crt name: kube-root-ca.crt - downwardAPI: items: - fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace - configMap: items: - key: service-ca.crt path: service-ca.crt name: openshift-service-ca.crt status: conditions: - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:17Z' status: 'True' type: PodReadyToStartContainers - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: Initialized - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:56Z' status: 'True' type: Ready - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:56Z' status: 'True' type: ContainersReady - lastProbeTime: null lastTransitionTime: '2025-12-13T06:59:15Z' status: 'True' type: PodScheduled containerStatuses: - containerID: cri-o://c42c232baf00da346bbee8e97f4a5367891b006ed6debee20446d6e50321c3d7 image: quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a imageID: quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a lastState: {} name: manager ready: true restartCount: 0 started: true state: running: startedAt: '2025-12-13T06:59:42Z' volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-stbm5 readOnly: true recursiveReadOnly: Disabled hostIP: 192.168.126.11 hostIPs: - ip: 192.168.126.11 phase: Running podIP: 10.217.0.86 podIPs: - ip: 10.217.0.86 qosClass: Burstable startTime: '2025-12-13T06:59:15Z'
2025-12-13 07:35:33,376 p=38280 u=zuul n=ansible | NO MORE HOSTS LEFT ************************************************************
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | PLAY RECAP ********************************************************************
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | localhost : ok=32 changed=10 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:35:33 +0000 (0:03:36.063) 0:04:07.792 *****
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | ===============================================================================
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | test_operator : Wait until the test-operator-controller-manager is reloaded - 216.06s
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | run_hook : Run hook without retry - 90 Create manila resources ---------- 6.88s
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | os_net_setup : Create subnet pools -------------------------------------- 5.50s
2025-12-13 07:35:33,382 p=38280 u=zuul n=ansible | os_net_setup : Delete existing subnet pools ----------------------------- 4.02s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | os_net_setup : Delete existing subnets ---------------------------------- 3.04s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | os_net_setup : Create subnets ------------------------------------------- 2.79s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | os_net_setup : Create networks ------------------------------------------ 2.50s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | os_net_setup : Delete existing networks --------------------------------- 2.03s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | test_operator : Patch test-operator version in CSV ---------------------- 0.82s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | test_operator : Get openstack-operator csv information ------------------ 0.77s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | test_operator : Get test-operator-controller-manager pod information ---- 0.61s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | test_operator : Get index of test-operator image ------------------------ 0.31s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook : Get file stat ------------------------------------------------ 0.23s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook : Ensure log directory exists ---------------------------------- 0.23s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook : Get parameters files ----------------------------------------- 0.23s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook : Check if we have a file -------------------------------------- 0.17s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | test_operator : Ensure test_operator folder exists ---------------------- 0.16s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook : Ensure artifacts directory exists ---------------------------- 0.16s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook : Loop on hooks for pre_tests ---------------------------------- 0.14s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook : Assert single hooks are all mappings ------------------------- 0.09s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | Saturday 13 December 2025 07:35:33 +0000 (0:03:36.064) 0:04:07.792 *****
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | ===============================================================================
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | test_operator --------------------------------------------------------- 218.85s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | os_net_setup ----------------------------------------------------------- 20.11s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | run_hook ---------------------------------------------------------------- 8.75s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | cifmw_setup ------------------------------------------------------------- 0.05s
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2025-12-13 07:35:33,383 p=38280 u=zuul n=ansible | total ----------------------------------------------------------------- 247.76s
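The recap above attributes the single failure (failed=1) to the task "test_operator : Wait until the test-operator-controller-manager is reloaded", which spent roughly 216 s waiting while the replacement pod test-operator-controller-manager-9fc9c756c-8sjtq stayed in ImagePullBackOff pulling 38.129.56.153:5001/openstack-k8s-operators/test-operator:d19f803f400b92d4afd97dd749e753a7435bfaca. Below is a minimal sketch of such a wait loop using the kubernetes.core.k8s_info module; the task name, label selector, register variable, and retry/delay values are illustrative assumptions, not the actual cifmw test_operator role code.

# Illustrative sketch only (assumed names and values); not the cifmw test_operator implementation.
- name: Wait until the test-operator-controller-manager is reloaded
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Pod
    namespace: openstack-operators
    label_selectors:
      - openstack.org/operator-name=test   # label seen on the pods dumped above
  register: _test_operator_pods            # hypothetical variable name
  retries: 72                              # assumed: 72 * 5s = 360s budget
  delay: 5
  until: >-
    _test_operator_pods.resources | length > 0 and
    _test_operator_pods.resources
    | selectattr('status.phase', 'equalto', 'Running')
    | list | length == _test_operator_pods.resources | length

In this run a loop of that shape could never succeed: the new pod remained in phase Pending with state.waiting.reason ImagePullBackOff, so the retries are exhausted and the play ends with the failed=1 shown in the recap.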