2026-01-22 06:44:11,358 p=30632 u=zuul n=ansible | Starting galaxy collection install process
2026-01-22 06:44:11,359 p=30632 u=zuul n=ansible | Process install dependency map
2026-01-22 06:44:27,124 p=30632 u=zuul n=ansible | Starting collection install process
2026-01-22 06:44:27,125 p=30632 u=zuul n=ansible | Installing 'cifmw.general:1.0.0+daa79182' to '/home/zuul/.ansible/collections/ansible_collections/cifmw/general'
2026-01-22 06:44:27,661 p=30632 u=zuul n=ansible | Created collection for cifmw.general:1.0.0+daa79182 at /home/zuul/.ansible/collections/ansible_collections/cifmw/general
2026-01-22 06:44:27,661 p=30632 u=zuul n=ansible | cifmw.general:1.0.0+daa79182 was installed successfully
2026-01-22 06:44:27,661 p=30632 u=zuul n=ansible | Installing 'containers.podman:1.16.2' to '/home/zuul/.ansible/collections/ansible_collections/containers/podman'
2026-01-22 06:44:27,718 p=30632 u=zuul n=ansible | Created collection for containers.podman:1.16.2 at /home/zuul/.ansible/collections/ansible_collections/containers/podman
2026-01-22 06:44:27,718 p=30632 u=zuul n=ansible | containers.podman:1.16.2 was installed successfully
2026-01-22 06:44:27,718 p=30632 u=zuul n=ansible | Installing 'community.general:10.0.1' to '/home/zuul/.ansible/collections/ansible_collections/community/general'
2026-01-22 06:44:28,447 p=30632 u=zuul n=ansible | Created collection for community.general:10.0.1 at /home/zuul/.ansible/collections/ansible_collections/community/general
2026-01-22 06:44:28,448 p=30632 u=zuul n=ansible | community.general:10.0.1 was installed successfully
2026-01-22 06:44:28,448 p=30632 u=zuul n=ansible | Installing 'ansible.posix:1.6.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/posix'
2026-01-22 06:44:28,496 p=30632 u=zuul n=ansible | Created collection for ansible.posix:1.6.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/posix
2026-01-22 06:44:28,496 p=30632 u=zuul n=ansible | ansible.posix:1.6.2 was installed successfully
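The `Installing '<namespace>.<name>:<version>'` records in this install stream follow a fixed shape, so the set of collections and versions a run pulled in can be recovered mechanically from the log. A minimal sketch, not part of the captured log; the function name and regex are illustrative assumptions:

```python
import re

# Matches galaxy-install records of the shape seen in this log, e.g.
#   ... n=ansible | Installing 'cifmw.general:1.0.0+daa79182' to '/home/zuul/...'
# '@' is allowed in names because placeholders like @NAMESPACE@.@NAME@ appear too.
INSTALL_RE = re.compile(
    r"\| Installing '(?P<fqcn>[\w@]+\.[\w@]+):(?P<version>[^']+)' to '(?P<path>[^']+)'"
)

def installed_collections(log_text: str) -> dict:
    """Map each fully-qualified collection name to the version the log installed."""
    return {
        m.group("fqcn"): m.group("version")
        for m in INSTALL_RE.finditer(log_text)
    }
```

Feeding the whole log through `installed_collections` gives a quick inventory to diff between CI runs when a dependency bump is suspected.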
2026-01-22 06:44:28,496 p=30632 u=zuul n=ansible | Installing 'ansible.utils:5.1.2' to '/home/zuul/.ansible/collections/ansible_collections/ansible/utils'
2026-01-22 06:44:28,594 p=30632 u=zuul n=ansible | Created collection for ansible.utils:5.1.2 at /home/zuul/.ansible/collections/ansible_collections/ansible/utils
2026-01-22 06:44:28,594 p=30632 u=zuul n=ansible | ansible.utils:5.1.2 was installed successfully
2026-01-22 06:44:28,594 p=30632 u=zuul n=ansible | Installing 'community.libvirt:1.3.0' to '/home/zuul/.ansible/collections/ansible_collections/community/libvirt'
2026-01-22 06:44:28,617 p=30632 u=zuul n=ansible | Created collection for community.libvirt:1.3.0 at /home/zuul/.ansible/collections/ansible_collections/community/libvirt
2026-01-22 06:44:28,617 p=30632 u=zuul n=ansible | community.libvirt:1.3.0 was installed successfully
2026-01-22 06:44:28,617 p=30632 u=zuul n=ansible | Installing 'community.crypto:2.22.3' to '/home/zuul/.ansible/collections/ansible_collections/community/crypto'
2026-01-22 06:44:28,755 p=30632 u=zuul n=ansible | Created collection for community.crypto:2.22.3 at /home/zuul/.ansible/collections/ansible_collections/community/crypto
2026-01-22 06:44:28,755 p=30632 u=zuul n=ansible | community.crypto:2.22.3 was installed successfully
2026-01-22 06:44:28,755 p=30632 u=zuul n=ansible | Installing 'kubernetes.core:5.0.0' to '/home/zuul/.ansible/collections/ansible_collections/kubernetes/core'
2026-01-22 06:44:28,868 p=30632 u=zuul n=ansible | Created collection for kubernetes.core:5.0.0 at /home/zuul/.ansible/collections/ansible_collections/kubernetes/core
2026-01-22 06:44:28,868 p=30632 u=zuul n=ansible | kubernetes.core:5.0.0 was installed successfully
2026-01-22 06:44:28,868 p=30632 u=zuul n=ansible | Installing 'ansible.netcommon:7.1.0' to '/home/zuul/.ansible/collections/ansible_collections/ansible/netcommon'
2026-01-22 06:44:28,934 p=30632 u=zuul n=ansible | Created collection for ansible.netcommon:7.1.0 at /home/zuul/.ansible/collections/ansible_collections/ansible/netcommon
2026-01-22 06:44:28,934 p=30632 u=zuul n=ansible | ansible.netcommon:7.1.0 was installed successfully
2026-01-22 06:44:28,934 p=30632 u=zuul n=ansible | Installing 'openstack.config_template:2.1.1' to '/home/zuul/.ansible/collections/ansible_collections/openstack/config_template'
2026-01-22 06:44:28,952 p=30632 u=zuul n=ansible | Created collection for openstack.config_template:2.1.1 at /home/zuul/.ansible/collections/ansible_collections/openstack/config_template
2026-01-22 06:44:28,952 p=30632 u=zuul n=ansible | openstack.config_template:2.1.1 was installed successfully
2026-01-22 06:44:28,952 p=30632 u=zuul n=ansible | Installing 'junipernetworks.junos:9.1.0' to '/home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos'
2026-01-22 06:44:29,190 p=30632 u=zuul n=ansible | Created collection for junipernetworks.junos:9.1.0 at /home/zuul/.ansible/collections/ansible_collections/junipernetworks/junos
2026-01-22 06:44:29,190 p=30632 u=zuul n=ansible | junipernetworks.junos:9.1.0 was installed successfully
2026-01-22 06:44:29,190 p=30632 u=zuul n=ansible | Installing 'cisco.ios:9.0.3' to '/home/zuul/.ansible/collections/ansible_collections/cisco/ios'
2026-01-22 06:44:29,469 p=30632 u=zuul n=ansible | Created collection for cisco.ios:9.0.3 at /home/zuul/.ansible/collections/ansible_collections/cisco/ios
2026-01-22 06:44:29,469 p=30632 u=zuul n=ansible | cisco.ios:9.0.3 was installed successfully
2026-01-22 06:44:29,469 p=30632 u=zuul n=ansible | Installing 'mellanox.onyx:1.0.0' to '/home/zuul/.ansible/collections/ansible_collections/mellanox/onyx'
2026-01-22 06:44:29,499 p=30632 u=zuul n=ansible | Created collection for mellanox.onyx:1.0.0 at /home/zuul/.ansible/collections/ansible_collections/mellanox/onyx
2026-01-22 06:44:29,499 p=30632 u=zuul n=ansible | mellanox.onyx:1.0.0 was installed successfully
2026-01-22 06:44:29,499 p=30632 u=zuul n=ansible | Installing 'community.okd:4.0.0' to '/home/zuul/.ansible/collections/ansible_collections/community/okd'
2026-01-22 06:44:29,525 p=30632 u=zuul n=ansible | Created collection for community.okd:4.0.0 at /home/zuul/.ansible/collections/ansible_collections/community/okd
2026-01-22 06:44:29,525 p=30632 u=zuul n=ansible | community.okd:4.0.0 was installed successfully
2026-01-22 06:44:29,525 p=30632 u=zuul n=ansible | Installing '@NAMESPACE@.@NAME@:3.1.4' to '/home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@'
2026-01-22 06:44:29,606 p=30632 u=zuul n=ansible | Created collection for @NAMESPACE@.@NAME@:3.1.4 at /home/zuul/.ansible/collections/ansible_collections/@NAMESPACE@/@NAME@
2026-01-22 06:44:29,606 p=30632 u=zuul n=ansible | @NAMESPACE@.@NAME@:3.1.4 was installed successfully
2026-01-22 06:44:38,465 p=31191 u=zuul n=ansible | PLAY [Bootstrap playbook] ******************************************************
2026-01-22 06:44:38,500 p=31191 u=zuul n=ansible | TASK [Gathering Facts ] ********************************************************
2026-01-22 06:44:38,500 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:38 +0000 (0:00:00.052) 0:00:00.052 ******
2026-01-22 06:44:38,500 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:38 +0000 (0:00:00.050) 0:00:00.050 ******
2026-01-22 06:44:39,798 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:39,815 p=31191 u=zuul n=ansible | TASK [cifmw_setup : Set custom cifmw PATH reusable fact cifmw_path={{ ansible_user_dir }}/.crc/bin:{{ ansible_user_dir }}/.crc/bin/oc:{{ ansible_user_dir }}/bin:{{ ansible_env.PATH }}, cacheable=True] ***
2026-01-22 06:44:39,815 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:39 +0000 (0:00:01.315) 0:00:01.367 ******
2026-01-22 06:44:39,815 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:39 +0000 (0:00:01.315) 0:00:01.366 ******
2026-01-22 06:44:39,838 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:39,846 p=31191 u=zuul n=ansible | TASK [cifmw_setup : Get
customized parameters ci_framework_params={{ hostvars[inventory_hostname] | dict2items | selectattr("key", "match", "^(cifmw|pre|post)_(?!install_yamls|openshift_token|openshift_login|openshift_kubeconfig).*") | list | items2dict }}] ***
2026-01-22 06:44:39,846 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:39 +0000 (0:00:00.030) 0:00:01.398 ******
2026-01-22 06:44:39,846 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:39 +0000 (0:00:00.030) 0:00:01.396 ******
2026-01-22 06:44:39,909 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:39,916 p=31191 u=zuul n=ansible | TASK [install_ca : Ensure target directory exists path={{ cifmw_install_ca_trust_dir }}, state=directory, mode=0755] ***
2026-01-22 06:44:39,916 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:39 +0000 (0:00:00.070) 0:00:01.468 ******
2026-01-22 06:44:39,917 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:39 +0000 (0:00:00.070) 0:00:01.467 ******
2026-01-22 06:44:40,266 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:40,273 p=31191 u=zuul n=ansible | TASK [install_ca : Install internal CA from url url={{ cifmw_install_ca_url }}, dest={{ cifmw_install_ca_trust_dir }}, validate_certs={{ cifmw_install_ca_url_validate_certs | default(omit) }}, mode=0644] ***
2026-01-22 06:44:40,273 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.356) 0:00:01.824 ******
2026-01-22 06:44:40,273 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.356) 0:00:01.823 ******
2026-01-22 06:44:40,293 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:40,300 p=31191 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from inline dest={{ cifmw_install_ca_trust_dir }}/cifmw_inline_ca_bundle.crt, content={{ cifmw_install_ca_bundle_inline }}, mode=0644] ***
2026-01-22 06:44:40,300 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.027) 0:00:01.852 ******
2026-01-22 06:44:40,300 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.027) 0:00:01.850 ******
2026-01-22 06:44:40,322 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:40,328 p=31191 u=zuul n=ansible | TASK [install_ca : Install custom CA bundle from file dest={{ cifmw_install_ca_trust_dir }}/{{ cifmw_install_ca_bundle_src | basename }}, src={{ cifmw_install_ca_bundle_src }}, mode=0644] ***
2026-01-22 06:44:40,328 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.027) 0:00:01.880 ******
2026-01-22 06:44:40,328 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.027) 0:00:01.878 ******
2026-01-22 06:44:40,349 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:40,355 p=31191 u=zuul n=ansible | TASK [install_ca : Update ca bundle _raw_params=update-ca-trust] ***************
2026-01-22 06:44:40,355 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.027) 0:00:01.907 ******
2026-01-22 06:44:40,355 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:40 +0000 (0:00:00.027) 0:00:01.906 ******
2026-01-22 06:44:41,821 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:41,844 p=31191 u=zuul n=ansible | TASK [repo_setup : Ensure directories are present path={{ cifmw_repo_setup_basedir }}/{{ item }}, state=directory, mode=0755] ***
2026-01-22 06:44:41,845 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:41 +0000 (0:00:01.489) 0:00:03.397 ******
2026-01-22 06:44:41,845 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:41 +0000 (0:00:01.489) 0:00:03.395 ******
2026-01-22 06:44:42,102 p=31191 u=zuul n=ansible | changed: [controller] => (item=tmp)
2026-01-22 06:44:42,315 p=31191 u=zuul n=ansible | changed: [controller] => (item=artifacts/repositories)
2026-01-22 06:44:42,534 p=31191 u=zuul n=ansible | changed: [controller] => (item=venv/repo_setup)
2026-01-22 06:44:42,542 p=31191 u=zuul n=ansible | TASK [repo_setup : Make sure git-core package is installed name=git-core, state=present] ***
2026-01-22 06:44:42,542 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:42 +0000 (0:00:00.697) 0:00:04.094 ******
2026-01-22 06:44:42,542 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:42 +0000 (0:00:00.697) 0:00:04.093 ******
2026-01-22 06:44:43,548 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:43,558 p=31191 u=zuul n=ansible | TASK [repo_setup : Get repo-setup repository accept_hostkey=True, dest={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, repo={{ cifmw_repo_setup_src }}] ***
2026-01-22 06:44:43,558 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:43 +0000 (0:00:01.015) 0:00:05.109 ******
2026-01-22 06:44:43,558 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:43 +0000 (0:00:01.015) 0:00:05.108 ******
2026-01-22 06:44:44,604 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:44,613 p=31191 u=zuul n=ansible | TASK [repo_setup : Initialize python venv and install requirements virtualenv={{ cifmw_repo_setup_venv }}, requirements={{ cifmw_repo_setup_basedir }}/tmp/repo-setup/requirements.txt, virtualenv_command=python3 -m venv --system-site-packages --upgrade-deps] ***
2026-01-22 06:44:44,613 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:44 +0000 (0:00:01.055) 0:00:06.165 ******
2026-01-22 06:44:44,613 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:44 +0000 (0:00:01.055) 0:00:06.163 ******
2026-01-22 06:44:53,292 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:53,306 p=31191 u=zuul n=ansible | TASK [repo_setup : Install repo-setup package chdir={{ cifmw_repo_setup_basedir }}/tmp/repo-setup, creates={{ cifmw_repo_setup_venv }}/bin/repo-setup, _raw_params={{ cifmw_repo_setup_venv }}/bin/python setup.py install] ***
2026-01-22 06:44:53,306 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:53 +0000 (0:00:08.692) 0:00:14.858 ******
2026-01-22 06:44:53,306 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:53 +0000 (0:00:08.692) 0:00:14.857 ******
2026-01-22 06:44:54,220 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:54,226 p=31191 u=zuul n=ansible | TASK [repo_setup : Set cifmw_repo_setup_dlrn_hash_tag from content provider cifmw_repo_setup_dlrn_hash_tag={{ content_provider_dlrn_md5_hash }}] ***
2026-01-22 06:44:54,226 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:54 +0000 (0:00:00.920) 0:00:15.778 ******
2026-01-22 06:44:54,227 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:54 +0000 (0:00:00.920) 0:00:15.777 ******
2026-01-22 06:44:54,260 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:54,267 p=31191 u=zuul n=ansible | TASK [repo_setup : Run repo-setup _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup {{ cifmw_repo_setup_promotion }} {{ cifmw_repo_setup_additional_repos }} -d {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} -b {{ cifmw_repo_setup_branch }} --rdo-mirror {{ cifmw_repo_setup_rdo_mirror }} {% if cifmw_repo_setup_dlrn_hash_tag | length > 0 %} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif %} -o {{ cifmw_repo_setup_output }}] ***
2026-01-22 06:44:54,267 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:54 +0000 (0:00:00.040) 0:00:15.819 ******
2026-01-22 06:44:54,267 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:54 +0000 (0:00:00.040) 0:00:15.817 ******
2026-01-22 06:44:54,943 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:54,955 p=31191 u=zuul n=ansible | TASK [repo_setup : Get component repo url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/component/{{ cifmw_repo_setup_component_name }}/{{ cifmw_repo_setup_component_promotion_tag }}/delorean.repo, dest={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo,
mode=0644] ***
2026-01-22 06:44:54,956 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:54 +0000 (0:00:00.688) 0:00:16.507 ******
2026-01-22 06:44:54,956 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:54 +0000 (0:00:00.688) 0:00:16.506 ******
2026-01-22 06:44:54,992 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:55,005 p=31191 u=zuul n=ansible | TASK [repo_setup : Rename component repo path={{ cifmw_repo_setup_output }}/{{ cifmw_repo_setup_component_name }}_{{ cifmw_repo_setup_component_promotion_tag }}_delorean.repo, regexp=delorean-component-{{ cifmw_repo_setup_component_name }}, replace={{ cifmw_repo_setup_component_name }}-{{ cifmw_repo_setup_component_promotion_tag }}] ***
2026-01-22 06:44:55,005 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.049) 0:00:16.557 ******
2026-01-22 06:44:55,005 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.049) 0:00:16.556 ******
2026-01-22 06:44:55,043 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:55,053 p=31191 u=zuul n=ansible | TASK [repo_setup : Disable component repo in current-podified dlrn repo path={{ cifmw_repo_setup_output }}/delorean.repo, section=delorean-component-{{ cifmw_repo_setup_component_name }}, option=enabled, value=0, mode=0644] ***
2026-01-22 06:44:55,053 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.048) 0:00:16.605 ******
2026-01-22 06:44:55,053 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.048) 0:00:16.604 ******
2026-01-22 06:44:55,092 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:55,102 p=31191 u=zuul n=ansible | TASK [repo_setup : Run repo-setup-get-hash _raw_params={{ cifmw_repo_setup_venv }}/bin/repo-setup-get-hash --dlrn-url {{ cifmw_repo_setup_dlrn_uri[:-1] }} --os-version {{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }} --release {{ cifmw_repo_setup_branch }} {% if cifmw_repo_setup_component_name | length > 0 -%} --component {{ cifmw_repo_setup_component_name }} --tag {{ cifmw_repo_setup_component_promotion_tag }} {% else -%} --tag {{cifmw_repo_setup_promotion }} {% endif -%} {% if (cifmw_repo_setup_dlrn_hash_tag | length > 0) and (cifmw_repo_setup_component_name | length <= 0) -%} --dlrn-hash-tag {{ cifmw_repo_setup_dlrn_hash_tag }} {% endif -%} --json] ***
2026-01-22 06:44:55,102 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.048) 0:00:16.654 ******
2026-01-22 06:44:55,102 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.048) 0:00:16.652 ******
2026-01-22 06:44:55,633 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:55,646 p=31191 u=zuul n=ansible | TASK [repo_setup : Dump full hash in delorean.repo.md5 file content={{ _repo_setup_json['full_hash'] }} , dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2026-01-22 06:44:55,646 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.544) 0:00:17.198 ******
2026-01-22 06:44:55,647 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:55 +0000 (0:00:00.544) 0:00:17.197 ******
2026-01-22 06:44:56,679 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:56,690 p=31191 u=zuul n=ansible | TASK [repo_setup : Dump current-podified hash url={{ cifmw_repo_setup_dlrn_uri }}/{{ cifmw_repo_setup_os_release }}{{ cifmw_repo_setup_dist_major_version }}-{{ cifmw_repo_setup_branch }}/current-podified/delorean.repo.md5, dest={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5, mode=0644] ***
2026-01-22 06:44:56,690 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:01.043) 0:00:18.242 ******
2026-01-22 06:44:56,690 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:01.043) 0:00:18.240 ******
2026-01-22 06:44:56,710 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,718 p=31191 u=zuul n=ansible | TASK [repo_setup : Slurp current podified hash src={{ cifmw_repo_setup_basedir }}/artifacts/repositories/delorean.repo.md5] ***
2026-01-22 06:44:56,718 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.028) 0:00:18.270 ******
2026-01-22 06:44:56,718 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.028) 0:00:18.269 ******
2026-01-22 06:44:56,738 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,748 p=31191 u=zuul n=ansible | TASK [repo_setup : Update the value of full_hash _repo_setup_json={{ _repo_setup_json | combine({'full_hash': _hash}, recursive=true) }}] ***
2026-01-22 06:44:56,748 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.030) 0:00:18.300 ******
2026-01-22 06:44:56,748 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.030) 0:00:18.299 ******
2026-01-22 06:44:56,769 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,779 p=31191 u=zuul n=ansible | TASK [repo_setup : Export hashes facts for further use cifmw_repo_setup_full_hash={{ _repo_setup_json['full_hash'] }}, cifmw_repo_setup_commit_hash={{ _repo_setup_json['commit_hash'] }}, cifmw_repo_setup_distro_hash={{ _repo_setup_json['distro_hash'] }}, cifmw_repo_setup_extended_hash={{ _repo_setup_json['extended_hash'] }}, cifmw_repo_setup_dlrn_api_url={{ _repo_setup_json['dlrn_api_url'] }}, cifmw_repo_setup_dlrn_url={{ _repo_setup_json['dlrn_url'] }}, cifmw_repo_setup_release={{ _repo_setup_json['release'] }}, cacheable=True] ***
2026-01-22 06:44:56,779 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.030) 0:00:18.331 ******
2026-01-22 06:44:56,779 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.030) 0:00:18.329 ******
2026-01-22 06:44:56,814 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:56,822 p=31191 u=zuul n=ansible | TASK [repo_setup : Create download
directory path={{ cifmw_repo_setup_rhos_release_path }}, state=directory, mode=0755] ***
2026-01-22 06:44:56,822 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.043) 0:00:18.374 ******
2026-01-22 06:44:56,822 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.043) 0:00:18.373 ******
2026-01-22 06:44:56,840 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,849 p=31191 u=zuul n=ansible | TASK [repo_setup : Print the URL to request msg={{ cifmw_repo_setup_rhos_release_rpm }}] ***
2026-01-22 06:44:56,849 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.026) 0:00:18.401 ******
2026-01-22 06:44:56,849 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.026) 0:00:18.400 ******
2026-01-22 06:44:56,866 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,875 p=31191 u=zuul n=ansible | TASK [Download the RPM name=krb_request] ***************************************
2026-01-22 06:44:56,875 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.025) 0:00:18.427 ******
2026-01-22 06:44:56,875 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.025) 0:00:18.426 ******
2026-01-22 06:44:56,890 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,900 p=31191 u=zuul n=ansible | TASK [repo_setup : Install RHOS Release tool name={{ cifmw_repo_setup_rhos_release_rpm if cifmw_repo_setup_rhos_release_rpm is not url else cifmw_krb_request_out.path }}, state=present, disable_gpg_check={{ cifmw_repo_setup_rhos_release_gpg_check | bool }}] ***
2026-01-22 06:44:56,900 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.024) 0:00:18.452 ******
2026-01-22 06:44:56,900 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.024) 0:00:18.450 ******
2026-01-22 06:44:56,914 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,922 p=31191 u=zuul n=ansible | TASK [repo_setup : Get rhos-release tool version _raw_params=rhos-release --version] ***
2026-01-22 06:44:56,922 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.022) 0:00:18.474 ******
2026-01-22 06:44:56,923 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.022) 0:00:18.473 ******
2026-01-22 06:44:56,937 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,945 p=31191 u=zuul n=ansible | TASK [repo_setup : Print rhos-release tool version msg={{ rr_version.stdout }}] ***
2026-01-22 06:44:56,945 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.022) 0:00:18.497 ******
2026-01-22 06:44:56,945 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.022) 0:00:18.496 ******
2026-01-22 06:44:56,959 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,970 p=31191 u=zuul n=ansible | TASK [repo_setup : Generate repos using rhos-release {{ cifmw_repo_setup_rhos_release_args }} _raw_params=rhos-release {{ cifmw_repo_setup_rhos_release_args }} \ -t {{ cifmw_repo_setup_output }}] ***
2026-01-22 06:44:56,970 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.024) 0:00:18.522 ******
2026-01-22 06:44:56,970 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.024) 0:00:18.521 ******
2026-01-22 06:44:56,987 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:56,996 p=31191 u=zuul n=ansible | TASK [repo_setup : Check for /etc/ci/mirror_info.sh path=/etc/ci/mirror_info.sh] ***
2026-01-22 06:44:56,996 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.025) 0:00:18.548 ******
2026-01-22 06:44:56,996 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:56 +0000 (0:00:00.025) 0:00:18.547 ******
2026-01-22 06:44:57,243 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:57,249 p=31191 u=zuul n=ansible | TASK [repo_setup : Use RDO proxy mirrors chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|https://trunk.rdoproject.org|$NODEPOOL_RDO_PROXY|g" *.repo ] ***
2026-01-22 06:44:57,250 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:57 +0000 (0:00:00.253) 0:00:18.801 ******
2026-01-22 06:44:57,250 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:57 +0000 (0:00:00.253) 0:00:18.800 ******
2026-01-22 06:44:57,508 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:57,515 p=31191 u=zuul n=ansible | TASK [repo_setup : Use RDO CentOS mirrors (remove CentOS 10 conditional when Nodepool mirrors exist) chdir={{ cifmw_repo_setup_output }}, _raw_params=set -o pipefail source /etc/ci/mirror_info.sh sed -i -e "s|http://mirror.stream.centos.org|$NODEPOOL_CENTOS_MIRROR|g" *.repo ] ***
2026-01-22 06:44:57,515 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:57 +0000 (0:00:00.265) 0:00:19.067 ******
2026-01-22 06:44:57,515 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:57 +0000 (0:00:00.265) 0:00:19.065 ******
2026-01-22 06:44:57,796 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:44:57,811 p=31191 u=zuul n=ansible | TASK [repo_setup : Check for gating.repo file on content provider url=http://{{ content_provider_registry_ip }}:8766/gating.repo] ***
2026-01-22 06:44:57,811 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:57 +0000 (0:00:00.296) 0:00:19.363 ******
2026-01-22 06:44:57,811 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:57 +0000 (0:00:00.296) 0:00:19.362 ******
2026-01-22 06:44:58,533 p=31191 u=zuul n=ansible | fatal: [controller]: FAILED! =>
    changed: false
    elapsed: 0
    msg: 'Status code was -1 and not [200]: Request failed: '
    redirected: false
    status: -1
    url: http://38.102.83.50:8766/gating.repo
2026-01-22 06:44:58,534 p=31191 u=zuul n=ansible | ...ignoring
2026-01-22 06:44:58,540 p=31191 u=zuul n=ansible | TASK [repo_setup : Populate gating repo from content provider ip content=[gating-repo] baseurl=http://{{ content_provider_registry_ip }}:8766/ enabled=1 gpgcheck=0 priority=1 , dest={{ cifmw_repo_setup_output }}/gating.repo, mode=0644] ***
2026-01-22 06:44:58,540 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.729) 0:00:20.092 ******
2026-01-22 06:44:58,541 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.729) 0:00:20.091 ******
2026-01-22 06:44:58,583 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:58,592 p=31191 u=zuul n=ansible | TASK [repo_setup : Check for DLRN repo at the destination path={{ cifmw_repo_setup_output }}/delorean.repo] ***
2026-01-22 06:44:58,592 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.052) 0:00:20.144 ******
2026-01-22 06:44:58,593 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.052) 0:00:20.143 ******
2026-01-22 06:44:58,627 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:58,635 p=31191 u=zuul n=ansible | TASK [repo_setup : Lower the priority of DLRN repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}/delorean.repo, regexp=priority=1, replace=priority=20] ***
2026-01-22 06:44:58,635 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.043) 0:00:20.187 ******
2026-01-22 06:44:58,636 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.043) 0:00:20.186 ******
2026-01-22 06:44:58,662 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:58,670 p=31191 u=zuul n=ansible | TASK [repo_setup : Check for DLRN component repo path={{
cifmw_repo_setup_output }}/{{ _comp_repo }}] ***
2026-01-22 06:44:58,670 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.034) 0:00:20.222 ******
2026-01-22 06:44:58,670 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.034) 0:00:20.221 ******
2026-01-22 06:44:58,697 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:58,708 p=31191 u=zuul n=ansible | TASK [repo_setup : Lower the priority of component repos to allow installation from gating repo path={{ cifmw_repo_setup_output }}//{{ _comp_repo }}, regexp=priority=1, replace=priority=2] ***
2026-01-22 06:44:58,708 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.038) 0:00:20.260 ******
2026-01-22 06:44:58,708 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.038) 0:00:20.259 ******
2026-01-22 06:44:58,734 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:44:58,743 p=31191 u=zuul n=ansible | TASK [repo_setup : Find existing repos from /etc/yum.repos.d directory paths=/etc/yum.repos.d/, patterns=*.repo, recurse=False] ***
2026-01-22 06:44:58,743 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.034) 0:00:20.295 ******
2026-01-22 06:44:58,743 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:58 +0000 (0:00:00.034) 0:00:20.293 ******
2026-01-22 06:44:59,087 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:44:59,096 p=31191 u=zuul n=ansible | TASK [repo_setup : Remove existing repos from /etc/yum.repos.d directory path={{ item }}, state=absent] ***
2026-01-22 06:44:59,096 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:59 +0000 (0:00:00.353) 0:00:20.648 ******
2026-01-22 06:44:59,097 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:59 +0000 (0:00:00.353) 0:00:20.647 ******
2026-01-22 06:44:59,343 p=31191 u=zuul n=ansible | changed: [controller] => (item=/etc/yum.repos.d/centos-addons.repo)
2026-01-22 06:44:59,608 p=31191 u=zuul n=ansible | changed: [controller] => (item=/etc/yum.repos.d/centos.repo)
2026-01-22 06:44:59,623 p=31191 u=zuul n=ansible | TASK [repo_setup : Cleanup existing metadata _raw_params=dnf clean metadata] ***
2026-01-22 06:44:59,624 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:59 +0000 (0:00:00.527) 0:00:21.175 ******
2026-01-22 06:44:59,624 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:44:59 +0000 (0:00:00.527) 0:00:21.174 ******
2026-01-22 06:45:00,111 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:45:00,117 p=31191 u=zuul n=ansible | TASK [repo_setup : Copy generated repos to /etc/yum.repos.d directory mode=0755, remote_src=True, src={{ cifmw_repo_setup_output }}/, dest=/etc/yum.repos.d] ***
2026-01-22 06:45:00,117 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.493) 0:00:21.669 ******
2026-01-22 06:45:00,117 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.493) 0:00:21.668 ******
2026-01-22 06:45:00,542 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:45:00,553 p=31191 u=zuul n=ansible | TASK [ci_setup : Gather variables for each operating system _raw_params={{ item }}] ***
2026-01-22 06:45:00,553 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.436) 0:00:22.105 ******
2026-01-22 06:45:00,553 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.436) 0:00:22.104 ******
2026-01-22 06:45:00,599 p=31191 u=zuul n=ansible | ok: [controller] => (item=/home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/ci_setup/vars/redhat.yml)
2026-01-22 06:45:00,607 p=31191 u=zuul n=ansible | TASK [ci_setup : List packages to install var=cifmw_ci_setup_packages] *********
2026-01-22 06:45:00,607 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.053) 0:00:22.159 ******
2026-01-22 06:45:00,607 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.053) 0:00:22.157 ******
2026-01-22 06:45:00,624 p=31191 u=zuul n=ansible | ok: [controller] =>
    cifmw_ci_setup_packages:
    - bash-completion
    - ca-certificates
    - git-core
    - make
    - tar
    - tmux
    - python3-pip
2026-01-22 06:45:00,631 p=31191 u=zuul n=ansible | TASK [ci_setup : Install needed packages name={{ cifmw_ci_setup_packages }}, state=latest] ***
2026-01-22 06:45:00,631 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.023) 0:00:22.183 ******
2026-01-22 06:45:00,631 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:00 +0000 (0:00:00.023) 0:00:22.181 ******
2026-01-22 06:45:29,548 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:45:29,555 p=31191 u=zuul n=ansible | TASK [ci_setup : Gather version of openshift client _raw_params=oc version --client -o yaml] ***
2026-01-22 06:45:29,555 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:29 +0000 (0:00:28.924) 0:00:51.107 ******
2026-01-22 06:45:29,555 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:29 +0000 (0:00:28.924) 0:00:51.106 ******
2026-01-22 06:45:29,790 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:45:29,804 p=31191 u=zuul n=ansible | TASK [ci_setup : Ensure openshift client install path is present path={{ cifmw_ci_setup_oc_install_path }}, state=directory, mode=0755] ***
2026-01-22 06:45:29,804 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:29 +0000 (0:00:00.248) 0:00:51.356 ******
2026-01-22 06:45:29,804 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:29 +0000 (0:00:00.249) 0:00:51.355 ******
2026-01-22 06:45:30,041 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:45:30,048 p=31191 u=zuul n=ansible | TASK [ci_setup : Install openshift client src={{ cifmw_ci_setup_openshift_client_download_uri }}/{{ cifmw_ci_setup_openshift_client_version }}/openshift-client-linux.tar.gz, dest={{ cifmw_ci_setup_oc_install_path }}, remote_src=True, mode=0755, creates={{ cifmw_ci_setup_oc_install_path }}/oc] ***
2026-01-22 06:45:30,048
p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:30 +0000 (0:00:00.244) 0:00:51.600 ****** 2026-01-22 06:45:30,048 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:30 +0000 (0:00:00.243) 0:00:51.599 ****** 2026-01-22 06:45:35,816 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:45:35,822 p=31191 u=zuul n=ansible | TASK [ci_setup : Add the OC path to cifmw_path if needed cifmw_path={{ cifmw_ci_setup_oc_install_path }}:{{ ansible_env.PATH }}, cacheable=True] *** 2026-01-22 06:45:35,822 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:35 +0000 (0:00:05.774) 0:00:57.374 ****** 2026-01-22 06:45:35,822 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:35 +0000 (0:00:05.774) 0:00:57.373 ****** 2026-01-22 06:45:35,843 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:35,851 p=31191 u=zuul n=ansible | TASK [ci_setup : Create completion file] *************************************** 2026-01-22 06:45:35,851 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:35 +0000 (0:00:00.028) 0:00:57.403 ****** 2026-01-22 06:45:35,851 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:35 +0000 (0:00:00.028) 0:00:57.401 ****** 2026-01-22 06:45:36,215 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:45:36,228 p=31191 u=zuul n=ansible | TASK [ci_setup : Source completion from within .bashrc create=True, mode=0644, path={{ ansible_user_dir }}/.bashrc, block=if [ -f ~/.oc_completion ]; then source ~/.oc_completion fi] *** 2026-01-22 06:45:36,229 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.377) 0:00:57.780 ****** 2026-01-22 06:45:36,229 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.377) 0:00:57.779 ****** 2026-01-22 06:45:36,581 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:45:36,594 p=31191 u=zuul n=ansible | TASK [ci_setup : Check rhsm status _raw_params=subscription-manager status] **** 2026-01-22 06:45:36,594 p=31191 
u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.365) 0:00:58.146 ****** 2026-01-22 06:45:36,595 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.365) 0:00:58.145 ****** 2026-01-22 06:45:36,610 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:36,624 p=31191 u=zuul n=ansible | TASK [ci_setup : Gather the repos to be enabled _repos={{ cifmw_ci_setup_rhel_rhsm_default_repos + (cifmw_ci_setup_rhel_rhsm_extra_repos | default([])) }}] *** 2026-01-22 06:45:36,624 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.030) 0:00:58.176 ****** 2026-01-22 06:45:36,625 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.030) 0:00:58.175 ****** 2026-01-22 06:45:36,645 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:36,652 p=31191 u=zuul n=ansible | TASK [ci_setup : Enabling the required repositories. name={{ item }}, state={{ rhsm_repo_state | default('enabled') }}] *** 2026-01-22 06:45:36,652 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.027) 0:00:58.204 ****** 2026-01-22 06:45:36,652 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.027) 0:00:58.202 ****** 2026-01-22 06:45:36,670 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:36,676 p=31191 u=zuul n=ansible | TASK [ci_setup : Get current /etc/redhat-release _raw_params=cat /etc/redhat-release] *** 2026-01-22 06:45:36,676 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.024) 0:00:58.228 ****** 2026-01-22 06:45:36,676 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.024) 0:00:58.227 ****** 2026-01-22 06:45:36,695 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:36,702 p=31191 u=zuul n=ansible | TASK [ci_setup : Print current /etc/redhat-release msg={{ _current_rh_release.stdout }}] *** 2026-01-22 06:45:36,702 p=31191 u=zuul n=ansible | Thursday 22 January 
2026 06:45:36 +0000 (0:00:00.026) 0:00:58.254 ****** 2026-01-22 06:45:36,703 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.026) 0:00:58.253 ****** 2026-01-22 06:45:36,722 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:36,729 p=31191 u=zuul n=ansible | TASK [ci_setup : Ensure the repos are enabled in the system using yum name={{ item.name }}, baseurl={{ item.baseurl }}, description={{ item.description | default(item.name) }}, gpgcheck={{ item.gpgcheck | default(false) }}, enabled=True, state={{ yum_repo_state | default('present') }}] *** 2026-01-22 06:45:36,729 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.026) 0:00:58.281 ****** 2026-01-22 06:45:36,729 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.026) 0:00:58.279 ****** 2026-01-22 06:45:36,758 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:36,765 p=31191 u=zuul n=ansible | TASK [ci_setup : Manage directories path={{ item }}, state={{ directory_state }}, mode=0755, owner={{ ansible_user_id }}, group={{ ansible_user_id }}] *** 2026-01-22 06:45:36,765 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.036) 0:00:58.317 ****** 2026-01-22 06:45:36,765 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:36 +0000 (0:00:00.036) 0:00:58.316 ****** 2026-01-22 06:45:37,067 p=31191 u=zuul n=ansible | changed: [controller] => (item=/home/zuul/ci-framework-data/artifacts/manifests/openstack/cr) 2026-01-22 06:45:37,325 p=31191 u=zuul n=ansible | changed: [controller] => (item=/home/zuul/ci-framework-data/logs) 2026-01-22 06:45:37,576 p=31191 u=zuul n=ansible | ok: [controller] => (item=/home/zuul/ci-framework-data/tmp) 2026-01-22 06:45:37,830 p=31191 u=zuul n=ansible | changed: [controller] => (item=/home/zuul/ci-framework-data/volumes) 2026-01-22 06:45:38,100 p=31191 u=zuul n=ansible | ok: [controller] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 
2026-01-22 06:45:38,118 p=31191 u=zuul n=ansible | TASK [Prepare install_yamls make targets name=install_yamls, apply={'tags': ['bootstrap']}] *** 2026-01-22 06:45:38,118 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:38 +0000 (0:00:01.352) 0:00:59.670 ****** 2026-01-22 06:45:38,118 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:38 +0000 (0:00:01.353) 0:00:59.669 ****** 2026-01-22 06:45:38,260 p=31191 u=zuul n=ansible | TASK [install_yamls : Ensure directories exist path={{ item }}, state=directory, mode=0755] *** 2026-01-22 06:45:38,261 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:38 +0000 (0:00:00.142) 0:00:59.813 ****** 2026-01-22 06:45:38,261 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:38 +0000 (0:00:00.142) 0:00:59.811 ****** 2026-01-22 06:45:38,527 p=31191 u=zuul n=ansible | ok: [controller] => (item=/home/zuul/ci-framework-data/artifacts) 2026-01-22 06:45:38,737 p=31191 u=zuul n=ansible | changed: [controller] => (item=/home/zuul/ci-framework-data/artifacts/roles/install_yamls_makes/tasks) 2026-01-22 06:45:38,952 p=31191 u=zuul n=ansible | ok: [controller] => (item=/home/zuul/ci-framework-data/artifacts/parameters) 2026-01-22 06:45:38,961 p=31191 u=zuul n=ansible | TASK [Create variables with local repos based on Zuul items name=install_yamls, tasks_from=zuul_set_operators_repo.yml] *** 2026-01-22 06:45:38,961 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:38 +0000 (0:00:00.700) 0:01:00.513 ****** 2026-01-22 06:45:38,961 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:38 +0000 (0:00:00.700) 0:01:00.511 ****** 2026-01-22 06:45:39,001 p=31191 u=zuul n=ansible | TASK [install_yamls : Set fact with local repos based on Zuul items cifmw_install_yamls_operators_repo={{ cifmw_install_yamls_operators_repo | default({}) | combine(_repo_operator_info | items2dict) }}] *** 2026-01-22 06:45:39,001 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.040) 0:01:00.553 ****** 
2026-01-22 06:45:39,001 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.040) 0:01:00.552 ****** 2026-01-22 06:45:39,058 p=31191 u=zuul n=ansible | ok: [controller] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'patchset': '2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None}) 2026-01-22 06:45:39,066 p=31191 u=zuul n=ansible | TASK [install_yamls : Print helpful data for debugging msg=_repo_operator_name: {{ _repo_operator_name }} _repo_operator_info: {{ _repo_operator_info }} cifmw_install_yamls_operators_repo: {{ cifmw_install_yamls_operators_repo }} ] *** 2026-01-22 06:45:39,066 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.064) 0:01:00.618 ****** 2026-01-22 06:45:39,066 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.064) 0:01:00.617 ****** 2026-01-22 06:45:39,116 p=31191 u=zuul n=ansible | ok: [controller] => (item={'branch': 'main', 'change': '320', 'change_url': 'https://github.com/openstack-k8s-operators/watcher-operator/pull/320', 'commit_id': '2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'patchset': '2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'project': {'canonical_hostname': 'github.com', 'canonical_name': 'github.com/openstack-k8s-operators/watcher-operator', 'name': 'openstack-k8s-operators/watcher-operator', 'short_name': 'watcher-operator', 'src_dir': 'src/github.com/openstack-k8s-operators/watcher-operator'}, 'topic': None}) => msg: | _repo_operator_name: watcher _repo_operator_info: [{'key': 'WATCHER_REPO', 'value': 
'/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator'}, {'key': 'WATCHER_BRANCH', 'value': ''}] cifmw_install_yamls_operators_repo: {'WATCHER_REPO': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator', 'WATCHER_BRANCH': ''} 2026-01-22 06:45:39,133 p=31191 u=zuul n=ansible | TASK [Customize install_yamls devsetup vars if needed name=install_yamls, tasks_from=customize_devsetup_vars.yml] *** 2026-01-22 06:45:39,133 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.066) 0:01:00.685 ****** 2026-01-22 06:45:39,133 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.066) 0:01:00.683 ****** 2026-01-22 06:45:39,228 p=31191 u=zuul n=ansible | TASK [install_yamls : Update opm_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^opm_version:, line=opm_version: {{ cifmw_install_yamls_opm_version }}, state=present] *** 2026-01-22 06:45:39,228 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.095) 0:01:00.780 ****** 2026-01-22 06:45:39,228 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.095) 0:01:00.779 ****** 2026-01-22 06:45:39,269 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:39,276 p=31191 u=zuul n=ansible | TASK [install_yamls : Update sdk_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^sdk_version:, line=sdk_version: {{ cifmw_install_yamls_sdk_version }}, state=present] *** 2026-01-22 06:45:39,276 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.047) 0:01:00.828 ****** 2026-01-22 06:45:39,276 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.047) 0:01:00.827 ****** 2026-01-22 06:45:39,313 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:39,323 p=31191 u=zuul n=ansible | TASK [install_yamls : Update 
go_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^go_version:, line=go_version: {{ cifmw_install_yamls_go_version }}, state=present] *** 2026-01-22 06:45:39,323 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.046) 0:01:00.875 ****** 2026-01-22 06:45:39,323 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.046) 0:01:00.874 ****** 2026-01-22 06:45:39,350 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:39,361 p=31191 u=zuul n=ansible | TASK [install_yamls : Update kustomize_version in install_yamls devsetup/vars/default.yaml path={{ cifmw_install_yamls_repo }}/devsetup/vars/default.yaml, regexp=^kustomize_version:, line=kustomize_version: {{ cifmw_install_yamls_kustomize_version }}, state=present] *** 2026-01-22 06:45:39,361 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.037) 0:01:00.913 ****** 2026-01-22 06:45:39,361 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.037) 0:01:00.911 ****** 2026-01-22 06:45:39,386 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:45:39,398 p=31191 u=zuul n=ansible | TASK [install_yamls : Compute the cifmw_install_yamls_vars final value _install_yamls_override_vars={{ _install_yamls_override_vars | default({}) | combine(item, recursive=True) }}] *** 2026-01-22 06:45:39,398 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.037) 0:01:00.950 ****** 2026-01-22 06:45:39,398 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.037) 0:01:00.949 ****** 2026-01-22 06:45:39,480 p=31191 u=zuul n=ansible | ok: [controller] => (item={'BMO_SETUP': False}) 2026-01-22 06:45:39,489 p=31191 u=zuul n=ansible | TASK [install_yamls : Set environment override cifmw_install_yamls_environment fact cifmw_install_yamls_environment={{ _install_yamls_override_vars.keys() | map('upper') | 
zip(_install_yamls_override_vars.values()) | items2dict(key_name=0, value_name=1) | combine({ 'OUT': cifmw_install_yamls_manifests_dir, 'OUTPUT_DIR': cifmw_install_yamls_edpm_dir, 'CHECKOUT_FROM_OPENSTACK_REF': cifmw_install_yamls_checkout_openstack_ref, 'OPENSTACK_K8S_BRANCH': (zuul is defined and not zuul.branch |regex_search('master|antelope|rhos')) | ternary(zuul.branch, 'main') }) | combine(install_yamls_operators_repos) }}, cacheable=True] *** 2026-01-22 06:45:39,490 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.091) 0:01:01.041 ****** 2026-01-22 06:45:39,490 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.091) 0:01:01.040 ****** 2026-01-22 06:45:39,535 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:45:39,542 p=31191 u=zuul n=ansible | TASK [install_yamls : Get environment structure base_path={{ cifmw_install_yamls_repo }}] *** 2026-01-22 06:45:39,542 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.052) 0:01:01.094 ****** 2026-01-22 06:45:39,542 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:39 +0000 (0:00:00.052) 0:01:01.092 ****** 2026-01-22 06:45:40,189 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:45:40,198 p=31191 u=zuul n=ansible | TASK [install_yamls : Ensure Output directory exists path={{ cifmw_install_yamls_out_dir }}, state=directory, mode=0755] *** 2026-01-22 06:45:40,198 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:40 +0000 (0:00:00.655) 0:01:01.750 ****** 2026-01-22 06:45:40,198 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:40 +0000 (0:00:00.655) 0:01:01.748 ****** 2026-01-22 06:45:40,461 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:45:40,471 p=31191 u=zuul n=ansible | TASK [install_yamls : Ensure user cifmw_install_yamls_vars contains existing Makefile variables that=_cifmw_install_yamls_unmatched_vars | length == 0, msg=cifmw_install_yamls_vars contains a variable that is not defined in 
install_yamls Makefile nor cifmw_install_yamls_whitelisted_vars: {{ _cifmw_install_yamls_unmatched_vars | join(', ')}}, quiet=True] *** 2026-01-22 06:45:40,471 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:40 +0000 (0:00:00.273) 0:01:02.023 ****** 2026-01-22 06:45:40,472 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:40 +0000 (0:00:00.273) 0:01:02.022 ****** 2026-01-22 06:45:40,509 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:45:40,524 p=31191 u=zuul n=ansible | TASK [install_yamls : Generate /home/zuul/ci-framework-data/artifacts/install_yamls.sh dest={{ cifmw_install_yamls_out_dir }}/{{ cifmw_install_yamls_envfile }}, content={% for k,v in cifmw_install_yamls_environment.items() %} export {{ k }}={{ v }} {% endfor %}, mode=0644] *** 2026-01-22 06:45:40,524 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:40 +0000 (0:00:00.052) 0:01:02.076 ****** 2026-01-22 06:45:40,524 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:40 +0000 (0:00:00.052) 0:01:02.075 ****** 2026-01-22 06:45:41,269 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:45:41,275 p=31191 u=zuul n=ansible | TASK [install_yamls : Set install_yamls default values cifmw_install_yamls_defaults={{ get_makefiles_env_output.makefiles_values | combine(cifmw_install_yamls_environment) }}, cacheable=True] *** 2026-01-22 06:45:41,275 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.750) 0:01:02.827 ****** 2026-01-22 06:45:41,275 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.750) 0:01:02.826 ****** 2026-01-22 06:45:41,305 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:45:41,313 p=31191 u=zuul n=ansible | TASK [install_yamls : Show the env structure var=cifmw_install_yamls_environment] *** 2026-01-22 06:45:41,313 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.038) 0:01:02.865 ****** 2026-01-22 06:45:41,313 p=31191 u=zuul n=ansible | Thursday 22 January 
2026 06:45:41 +0000 (0:00:00.038) 0:01:02.864 ****** 2026-01-22 06:45:41,336 p=31191 u=zuul n=ansible | ok: [controller] => cifmw_install_yamls_environment: BMO_SETUP: false CHECKOUT_FROM_OPENSTACK_REF: 'true' OPENSTACK_K8S_BRANCH: main OUT: /home/zuul/ci-framework-data/artifacts/manifests OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm WATCHER_BRANCH: '' WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator 2026-01-22 06:45:41,344 p=31191 u=zuul n=ansible | TASK [install_yamls : Show the env structure defaults var=cifmw_install_yamls_defaults] *** 2026-01-22 06:45:41,344 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.030) 0:01:02.896 ****** 2026-01-22 06:45:41,344 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.030) 0:01:02.894 ****** 2026-01-22 06:45:41,380 p=31191 u=zuul n=ansible | ok: [controller] => cifmw_install_yamls_defaults: ADOPTED_EXTERNAL_NETWORK: 172.21.1.0/24 ADOPTED_INTERNALAPI_NETWORK: 172.17.1.0/24 ADOPTED_STORAGEMGMT_NETWORK: 172.20.1.0/24 ADOPTED_STORAGE_NETWORK: 172.18.1.0/24 ADOPTED_TENANT_NETWORK: 172.9.1.0/24 ANSIBLEEE: config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_BRANCH: main ANSIBLEEE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/config/samples/_v1beta1_ansibleee.yaml ANSIBLEEE_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-operator-index:latest ANSIBLEEE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/kuttl-test.yaml ANSIBLEEE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-ansibleee-operator/test/kuttl/tests ANSIBLEEE_KUTTL_NAMESPACE: ansibleee-kuttl-tests ANSIBLEEE_REPO: https://github.com/openstack-k8s-operators/openstack-ansibleee-operator ANSIBLEE_COMMIT_HASH: '' BARBICAN: config/samples/barbican_v1beta1_barbican.yaml BARBICAN_BRANCH: main BARBICAN_COMMIT_HASH: '' BARBICAN_CR: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/config/samples/barbican_v1beta1_barbican.yaml BARBICAN_DEPL_IMG: unused BARBICAN_IMG: quay.io/openstack-k8s-operators/barbican-operator-index:latest BARBICAN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/kuttl-test.yaml BARBICAN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/barbican-operator/test/kuttl/tests BARBICAN_KUTTL_NAMESPACE: barbican-kuttl-tests BARBICAN_REPO: https://github.com/openstack-k8s-operators/barbican-operator.git BARBICAN_SERVICE_ENABLED: 'true' BARBICAN_SIMPLE_CRYPTO_ENCRYPTION_KEY: sE**********U= BAREMETAL_BRANCH: main BAREMETAL_COMMIT_HASH: '' BAREMETAL_IMG: quay.io/openstack-k8s-operators/openstack-baremetal-operator-index:latest BAREMETAL_OS_CONTAINER_IMG: '' BAREMETAL_OS_IMG: '' BAREMETAL_OS_IMG_TYPE: '' BAREMETAL_REPO: https://github.com/openstack-k8s-operators/openstack-baremetal-operator.git BAREMETAL_TIMEOUT: 20m BASH_IMG: quay.io/openstack-k8s-operators/bash:latest BGP_ASN: '64999' BGP_LEAF_1: 100.65.4.1 BGP_LEAF_2: 100.64.4.1 BGP_OVN_ROUTING: 'false' BGP_PEER_ASN: '64999' BGP_SOURCE_IP: 172.30.4.2 BGP_SOURCE_IP6: f00d:f00d:f00d:f00d:f00d:f00d:f00d:42 BMAAS_BRIDGE_IPV4_PREFIX: 172.20.1.2/24 BMAAS_BRIDGE_IPV6_PREFIX: fd00:bbbb::2/64 BMAAS_INSTANCE_DISK_SIZE: '20' BMAAS_INSTANCE_MEMORY: '4096' BMAAS_INSTANCE_NAME_PREFIX: crc-bmaas BMAAS_INSTANCE_NET_MODEL: virtio BMAAS_INSTANCE_OS_VARIANT: centos-stream9 BMAAS_INSTANCE_VCPUS: '2' BMAAS_INSTANCE_VIRT_TYPE: kvm BMAAS_IPV4: 'true' BMAAS_IPV6: 'false' BMAAS_LIBVIRT_USER: sushyemu BMAAS_METALLB_ADDRESS_POOL: 172.20.1.64/26 BMAAS_METALLB_POOL_NAME: baremetal BMAAS_NETWORK_IPV4_PREFIX: 172.20.1.1/24 BMAAS_NETWORK_IPV6_PREFIX: fd00:bbbb::1/64 BMAAS_NETWORK_NAME: crc-bmaas BMAAS_NODE_COUNT: '1' BMAAS_OCP_INSTANCE_NAME: crc BMAAS_REDFISH_PASSWORD: password BMAAS_REDFISH_USERNAME: admin BMAAS_ROUTE_LIBVIRT_NETWORKS: crc-bmaas,crc,default 
BMAAS_SUSHY_EMULATOR_DRIVER: libvirt BMAAS_SUSHY_EMULATOR_IMAGE: quay.io/metal3-io/sushy-tools:latest BMAAS_SUSHY_EMULATOR_NAMESPACE: sushy-emulator BMAAS_SUSHY_EMULATOR_OS_CLIENT_CONFIG_FILE: /etc/openstack/clouds.yaml BMAAS_SUSHY_EMULATOR_OS_CLOUD: openstack BMH_NAMESPACE: openstack BMO_BRANCH: release-0.9 BMO_CLEANUP: 'true' BMO_COMMIT_HASH: '' BMO_IPA_BRANCH: stable/2024.1 BMO_IRONIC_HOST: 192.168.122.10 BMO_PROVISIONING_INTERFACE: '' BMO_REPO: https://github.com/metal3-io/baremetal-operator BMO_SETUP: false BMO_SETUP_ROUTE_REPLACE: 'true' BM_CTLPLANE_INTERFACE: enp1s0 BM_INSTANCE_MEMORY: '8192' BM_INSTANCE_NAME_PREFIX: edpm-compute-baremetal BM_INSTANCE_NAME_SUFFIX: '0' BM_NETWORK_NAME: default BM_NODE_COUNT: '1' BM_ROOT_PASSWORD: '' BM_ROOT_PASSWORD_SECRET: '' CEILOMETER_CENTRAL_DEPL_IMG: unused CEILOMETER_NOTIFICATION_DEPL_IMG: unused CEPH_BRANCH: release-1.15 CEPH_CLIENT: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/toolbox.yaml CEPH_COMMON: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/common.yaml CEPH_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/cluster-test.yaml CEPH_CRDS: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/crds.yaml CEPH_IMG: quay.io/ceph/demo:latest-squid CEPH_OP: /home/zuul/ci-framework-data/artifacts/manifests/operator/rook/deploy/examples/operator-openshift.yaml CEPH_REPO: https://github.com/rook/rook.git CERTMANAGER_TIMEOUT: 300s CHECKOUT_FROM_OPENSTACK_REF: 'true' CINDER: config/samples/cinder_v1beta1_cinder.yaml CINDERAPI_DEPL_IMG: unused CINDERBKP_DEPL_IMG: unused CINDERSCH_DEPL_IMG: unused CINDERVOL_DEPL_IMG: unused CINDER_BRANCH: main CINDER_COMMIT_HASH: '' CINDER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/config/samples/cinder_v1beta1_cinder.yaml CINDER_IMG: quay.io/openstack-k8s-operators/cinder-operator-index:latest CINDER_KUTTL_CONF: 
/home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/kuttl-test.yaml CINDER_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/cinder-operator/test/kuttl/tests CINDER_KUTTL_NAMESPACE: cinder-kuttl-tests CINDER_REPO: https://github.com/openstack-k8s-operators/cinder-operator.git CLEANUP_DIR_CMD: rm -Rf CRC_BGP_NIC_1_MAC: '52:54:00:11:11:11' CRC_BGP_NIC_2_MAC: '52:54:00:11:11:12' CRC_HTTPS_PROXY: '' CRC_HTTP_PROXY: '' CRC_STORAGE_NAMESPACE: crc-storage CRC_STORAGE_RETRIES: '3' CRC_URL: '''https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz''' CRC_VERSION: latest DATAPLANE_ANSIBLE_SECRET: dataplane-ansible-ssh-private-key-secret DATAPLANE_ANSIBLE_USER: '' DATAPLANE_COMPUTE_IP: 192.168.122.100 DATAPLANE_CONTAINER_PREFIX: openstack DATAPLANE_CONTAINER_TAG: current-podified DATAPLANE_CUSTOM_SERVICE_RUNNER_IMG: quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest DATAPLANE_DEFAULT_GW: 192.168.122.1 DATAPLANE_EXTRA_NOVA_CONFIG_FILE: /dev/null DATAPLANE_GROWVOLS_ARGS: /=8GB /tmp=1GB /home=1GB /var=100% DATAPLANE_KUSTOMIZE_SCENARIO: preprovisioned DATAPLANE_NETWORKER_IP: 192.168.122.200 DATAPLANE_NETWORK_INTERFACE_NAME: eth0 DATAPLANE_NOVA_NFS_PATH: '' DATAPLANE_NTP_SERVER: pool.ntp.org DATAPLANE_PLAYBOOK: osp.edpm.download_cache DATAPLANE_REGISTRY_URL: quay.io/podified-antelope-centos9 DATAPLANE_RUNNER_IMG: '' DATAPLANE_SERVER_ROLE: compute DATAPLANE_SSHD_ALLOWED_RANGES: '[''192.168.122.0/24'']' DATAPLANE_TIMEOUT: 30m DATAPLANE_TLS_ENABLED: 'true' DATAPLANE_TOTAL_NETWORKER_NODES: '1' DATAPLANE_TOTAL_NODES: '1' DBSERVICE: galera DESIGNATE: config/samples/designate_v1beta1_designate.yaml DESIGNATE_BRANCH: main DESIGNATE_COMMIT_HASH: '' DESIGNATE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/config/samples/designate_v1beta1_designate.yaml DESIGNATE_IMG: quay.io/openstack-k8s-operators/designate-operator-index:latest 
DESIGNATE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/kuttl-test.yaml DESIGNATE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/designate-operator/test/kuttl/tests DESIGNATE_KUTTL_NAMESPACE: designate-kuttl-tests DESIGNATE_REPO: https://github.com/openstack-k8s-operators/designate-operator.git DNSDATA: config/samples/network_v1beta1_dnsdata.yaml DNSDATA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsdata.yaml DNSMASQ: config/samples/network_v1beta1_dnsmasq.yaml DNSMASQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_dnsmasq.yaml DNS_DEPL_IMG: unused DNS_DOMAIN: localdomain DOWNLOAD_TOOLS_SELECTION: all EDPM_ATTACH_EXTNET: 'true' EDPM_COMPUTE_ADDITIONAL_HOST_ROUTES: '''[]''' EDPM_COMPUTE_ADDITIONAL_NETWORKS: '''[]''' EDPM_COMPUTE_CELLS: '1' EDPM_COMPUTE_CEPH_ENABLED: 'true' EDPM_COMPUTE_CEPH_NOVA: 'true' EDPM_COMPUTE_DHCP_AGENT_ENABLED: 'true' EDPM_COMPUTE_SRIOV_ENABLED: 'true' EDPM_COMPUTE_SUFFIX: '0' EDPM_CONFIGURE_DEFAULT_ROUTE: 'true' EDPM_CONFIGURE_HUGEPAGES: 'false' EDPM_CONFIGURE_NETWORKING: 'true' EDPM_FIRSTBOOT_EXTRA: /tmp/edpm-firstboot-extra EDPM_NETWORKER_SUFFIX: '0' EDPM_TOTAL_NETWORKERS: '1' EDPM_TOTAL_NODES: '1' GALERA_REPLICAS: '' GENERATE_SSH_KEYS: 'true' GIT_CLONE_OPTS: '' GLANCE: config/samples/glance_v1beta1_glance.yaml GLANCEAPI_DEPL_IMG: unused GLANCE_BRANCH: main GLANCE_COMMIT_HASH: '' GLANCE_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/config/samples/glance_v1beta1_glance.yaml GLANCE_IMG: quay.io/openstack-k8s-operators/glance-operator-index:latest GLANCE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/kuttl-test.yaml GLANCE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/glance-operator/test/kuttl/tests GLANCE_KUTTL_NAMESPACE: glance-kuttl-tests GLANCE_REPO: 
https://github.com/openstack-k8s-operators/glance-operator.git HEAT: config/samples/heat_v1beta1_heat.yaml HEATAPI_DEPL_IMG: unused HEATCFNAPI_DEPL_IMG: unused HEATENGINE_DEPL_IMG: unused HEAT_AUTH_ENCRYPTION_KEY: 76**********f0 HEAT_BRANCH: main HEAT_COMMIT_HASH: '' HEAT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/config/samples/heat_v1beta1_heat.yaml HEAT_IMG: quay.io/openstack-k8s-operators/heat-operator-index:latest HEAT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/kuttl-test.yaml HEAT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/heat-operator/test/kuttl/tests HEAT_KUTTL_NAMESPACE: heat-kuttl-tests HEAT_REPO: https://github.com/openstack-k8s-operators/heat-operator.git HEAT_SERVICE_ENABLED: 'true' HORIZON: config/samples/horizon_v1beta1_horizon.yaml HORIZON_BRANCH: main HORIZON_COMMIT_HASH: '' HORIZON_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/config/samples/horizon_v1beta1_horizon.yaml HORIZON_DEPL_IMG: unused HORIZON_IMG: quay.io/openstack-k8s-operators/horizon-operator-index:latest HORIZON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/kuttl-test.yaml HORIZON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/horizon-operator/test/kuttl/tests HORIZON_KUTTL_NAMESPACE: horizon-kuttl-tests HORIZON_REPO: https://github.com/openstack-k8s-operators/horizon-operator.git INFRA_BRANCH: main INFRA_COMMIT_HASH: '' INFRA_IMG: quay.io/openstack-k8s-operators/infra-operator-index:latest INFRA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/kuttl-test.yaml INFRA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/test/kuttl/tests INFRA_KUTTL_NAMESPACE: infra-kuttl-tests INFRA_REPO: https://github.com/openstack-k8s-operators/infra-operator.git INSTALL_CERT_MANAGER: 'true' INSTALL_NMSTATE: true || false INSTALL_NNCP: true 
|| false INTERNALAPI_HOST_ROUTES: '' IPV6_LAB_IPV4_NETWORK_IPADDRESS: 172.30.0.1/24 IPV6_LAB_IPV6_NETWORK_IPADDRESS: fd00:abcd:abcd:fc00::1/64 IPV6_LAB_LIBVIRT_STORAGE_POOL: default IPV6_LAB_MANAGE_FIREWALLD: 'true' IPV6_LAB_NAT64_HOST_IPV4: 172.30.0.2/24 IPV6_LAB_NAT64_HOST_IPV6: fd00:abcd:abcd:fc00::2/64 IPV6_LAB_NAT64_INSTANCE_NAME: nat64-router IPV6_LAB_NAT64_IPV6_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_NAT64_TAYGA_DYNAMIC_POOL: 192.168.255.0/24 IPV6_LAB_NAT64_TAYGA_IPV4: 192.168.255.1 IPV6_LAB_NAT64_TAYGA_IPV6: fd00:abcd:abcd:fc00::3 IPV6_LAB_NAT64_TAYGA_IPV6_PREFIX: fd00:abcd:abcd:fcff::/96 IPV6_LAB_NAT64_UPDATE_PACKAGES: 'false' IPV6_LAB_NETWORK_NAME: nat64 IPV6_LAB_SNO_CLUSTER_NETWORK: fd00:abcd:0::/48 IPV6_LAB_SNO_HOST_IP: fd00:abcd:abcd:fc00::11 IPV6_LAB_SNO_HOST_PREFIX: '64' IPV6_LAB_SNO_INSTANCE_NAME: sno IPV6_LAB_SNO_MACHINE_NETWORK: fd00:abcd:abcd:fc00::/64 IPV6_LAB_SNO_OCP_MIRROR_URL: https://mirror.openshift.com/pub/openshift-v4/clients/ocp IPV6_LAB_SNO_OCP_VERSION: latest-4.14 IPV6_LAB_SNO_SERVICE_NETWORK: fd00:abcd:abcd:fc03::/112 IPV6_LAB_SSH_PUB_KEY: /home/zuul/.ssh/id_rsa.pub IPV6_LAB_WORK_DIR: /home/zuul/.ipv6lab IRONIC: config/samples/ironic_v1beta1_ironic.yaml IRONICAPI_DEPL_IMG: unused IRONICCON_DEPL_IMG: unused IRONICINS_DEPL_IMG: unused IRONICNAG_DEPL_IMG: unused IRONICPXE_DEPL_IMG: unused IRONIC_BRANCH: main IRONIC_COMMIT_HASH: '' IRONIC_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/config/samples/ironic_v1beta1_ironic.yaml IRONIC_IMAGE: quay.io/metal3-io/ironic IRONIC_IMAGE_TAG: release-24.1 IRONIC_IMG: quay.io/openstack-k8s-operators/ironic-operator-index:latest IRONIC_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/kuttl-test.yaml IRONIC_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ironic-operator/test/kuttl/tests IRONIC_KUTTL_NAMESPACE: ironic-kuttl-tests IRONIC_REPO: https://github.com/openstack-k8s-operators/ironic-operator.git 
KEYSTONEAPI: config/samples/keystone_v1beta1_keystoneapi.yaml
KEYSTONEAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/config/samples/keystone_v1beta1_keystoneapi.yaml
KEYSTONEAPI_DEPL_IMG: unused
KEYSTONE_BRANCH: main
KEYSTONE_COMMIT_HASH: ''
KEYSTONE_FEDERATION_CLIENT_SECRET: CO**********6f
KEYSTONE_FEDERATION_CRYPTO_PASSPHRASE: openstack
KEYSTONE_IMG: quay.io/openstack-k8s-operators/keystone-operator-index:latest
KEYSTONE_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/kuttl-test.yaml
KEYSTONE_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/keystone-operator/test/kuttl/tests
KEYSTONE_KUTTL_NAMESPACE: keystone-kuttl-tests
KEYSTONE_REPO: https://github.com/openstack-k8s-operators/keystone-operator.git
KUBEADMIN_PWD: '12345678'
LIBVIRT_SECRET: libvirt-secret
LOKI_DEPLOY_MODE: openshift-network
LOKI_DEPLOY_NAMESPACE: netobserv
LOKI_DEPLOY_SIZE: 1x.demo
LOKI_NAMESPACE: openshift-operators-redhat
LOKI_OPERATOR_GROUP: openshift-operators-redhat-loki
LOKI_SUBSCRIPTION: loki-operator
LVMS_CR: '1'
MANILA: config/samples/manila_v1beta1_manila.yaml
MANILAAPI_DEPL_IMG: unused
MANILASCH_DEPL_IMG: unused
MANILASHARE_DEPL_IMG: unused
MANILA_BRANCH: main
MANILA_COMMIT_HASH: ''
MANILA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/config/samples/manila_v1beta1_manila.yaml
MANILA_IMG: quay.io/openstack-k8s-operators/manila-operator-index:latest
MANILA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/kuttl-test.yaml
MANILA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/manila-operator/test/kuttl/tests
MANILA_KUTTL_NAMESPACE: manila-kuttl-tests
MANILA_REPO: https://github.com/openstack-k8s-operators/manila-operator.git
MANILA_SERVICE_ENABLED: 'true'
MARIADB: config/samples/mariadb_v1beta1_galera.yaml
MARIADB_BRANCH: main
MARIADB_CHAINSAW_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/config.yaml
MARIADB_CHAINSAW_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/chainsaw/tests
MARIADB_CHAINSAW_NAMESPACE: mariadb-chainsaw-tests
MARIADB_COMMIT_HASH: ''
MARIADB_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/config/samples/mariadb_v1beta1_galera.yaml
MARIADB_DEPL_IMG: unused
MARIADB_IMG: quay.io/openstack-k8s-operators/mariadb-operator-index:latest
MARIADB_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/kuttl-test.yaml
MARIADB_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/mariadb-operator/test/kuttl/tests
MARIADB_KUTTL_NAMESPACE: mariadb-kuttl-tests
MARIADB_REPO: https://github.com/openstack-k8s-operators/mariadb-operator.git
MEMCACHED: config/samples/memcached_v1beta1_memcached.yaml
MEMCACHED_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/memcached_v1beta1_memcached.yaml
MEMCACHED_DEPL_IMG: unused
METADATA_SHARED_SECRET: '12**********42'
METALLB_IPV6_POOL: fd00:aaaa::80-fd00:aaaa::90
METALLB_POOL: 192.168.122.80-192.168.122.90
MICROSHIFT: '0'
NAMESPACE: openstack
NETCONFIG: config/samples/network_v1beta1_netconfig.yaml
NETCONFIG_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator/config/samples/network_v1beta1_netconfig.yaml
NETCONFIG_DEPL_IMG: unused
NETOBSERV_DEPLOY_NAMESPACE: netobserv
NETOBSERV_NAMESPACE: openshift-netobserv-operator
NETOBSERV_OPERATOR_GROUP: openshift-netobserv-operator-net
NETOBSERV_SUBSCRIPTION: netobserv-operator
NETWORK_BGP: 'false'
NETWORK_DESIGNATE_ADDRESS_PREFIX: 172.28.0
NETWORK_DESIGNATE_EXT_ADDRESS_PREFIX: 172.50.0
NETWORK_INTERNALAPI_ADDRESS_PREFIX: 172.17.0
NETWORK_ISOLATION: 'true'
NETWORK_ISOLATION_INSTANCE_NAME: crc
NETWORK_ISOLATION_IPV4: 'true'
NETWORK_ISOLATION_IPV4_ADDRESS: 172.16.1.1/24
NETWORK_ISOLATION_IPV4_NAT: 'true'
NETWORK_ISOLATION_IPV6: 'false'
NETWORK_ISOLATION_IPV6_ADDRESS: fd00:aaaa::1/64
NETWORK_ISOLATION_IP_ADDRESS: 192.168.122.10
NETWORK_ISOLATION_MAC: '52:54:00:11:11:10'
NETWORK_ISOLATION_NETWORK_NAME: net-iso
NETWORK_ISOLATION_NET_NAME: default
NETWORK_ISOLATION_USE_DEFAULT_NETWORK: 'true'
NETWORK_MTU: '1500'
NETWORK_STORAGEMGMT_ADDRESS_PREFIX: 172.20.0
NETWORK_STORAGE_ADDRESS_PREFIX: 172.18.0
NETWORK_STORAGE_MACVLAN: ''
NETWORK_TENANT_ADDRESS_PREFIX: 172.19.0
NETWORK_VLAN_START: '20'
NETWORK_VLAN_STEP: '1'
NEUTRONAPI: config/samples/neutron_v1beta1_neutronapi.yaml
NEUTRONAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/config/samples/neutron_v1beta1_neutronapi.yaml
NEUTRONAPI_DEPL_IMG: unused
NEUTRON_BRANCH: main
NEUTRON_COMMIT_HASH: ''
NEUTRON_IMG: quay.io/openstack-k8s-operators/neutron-operator-index:latest
NEUTRON_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/kuttl-test.yaml
NEUTRON_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/neutron-operator/test/kuttl/tests
NEUTRON_KUTTL_NAMESPACE: neutron-kuttl-tests
NEUTRON_REPO: https://github.com/openstack-k8s-operators/neutron-operator.git
NFS_HOME: /home/nfs
NMSTATE_NAMESPACE: openshift-nmstate
NMSTATE_OPERATOR_GROUP: openshift-nmstate-tn6k8
NMSTATE_SUBSCRIPTION: kubernetes-nmstate-operator
NNCP_ADDITIONAL_HOST_ROUTES: ''
NNCP_BGP_1_INTERFACE: enp7s0
NNCP_BGP_1_IP_ADDRESS: 100.65.4.2
NNCP_BGP_2_INTERFACE: enp8s0
NNCP_BGP_2_IP_ADDRESS: 100.64.4.2
NNCP_BRIDGE: ospbr
NNCP_CLEANUP_TIMEOUT: 120s
NNCP_CTLPLANE_IPV6_ADDRESS_PREFIX: 'fd00:aaaa::'
NNCP_CTLPLANE_IPV6_ADDRESS_SUFFIX: '10'
NNCP_CTLPLANE_IP_ADDRESS_PREFIX: 192.168.122
NNCP_CTLPLANE_IP_ADDRESS_SUFFIX: '10'
NNCP_DNS_SERVER: 192.168.122.1
NNCP_DNS_SERVER_IPV6: fd00:aaaa::1
NNCP_GATEWAY: 192.168.122.1
NNCP_GATEWAY_IPV6: fd00:aaaa::1
NNCP_INTERFACE: enp6s0
NNCP_NODES: ''
NNCP_TIMEOUT: 240s
NOVA: config/samples/nova_v1beta1_nova_collapsed_cell.yaml
NOVA_BRANCH: main
NOVA_COMMIT_HASH: ''
NOVA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/nova-operator/config/samples/nova_v1beta1_nova_collapsed_cell.yaml
NOVA_IMG: quay.io/openstack-k8s-operators/nova-operator-index:latest
NOVA_REPO: https://github.com/openstack-k8s-operators/nova-operator.git
NUMBER_OF_INSTANCES: '1'
OCP_NETWORK_NAME: crc
OCTAVIA: config/samples/octavia_v1beta1_octavia.yaml
OCTAVIA_BRANCH: main
OCTAVIA_COMMIT_HASH: ''
OCTAVIA_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/config/samples/octavia_v1beta1_octavia.yaml
OCTAVIA_IMG: quay.io/openstack-k8s-operators/octavia-operator-index:latest
OCTAVIA_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/kuttl-test.yaml
OCTAVIA_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/octavia-operator/test/kuttl/tests
OCTAVIA_KUTTL_NAMESPACE: octavia-kuttl-tests
OCTAVIA_REPO: https://github.com/openstack-k8s-operators/octavia-operator.git
OKD: 'false'
OPENSTACK_BRANCH: main
OPENSTACK_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-bundle:latest
OPENSTACK_COMMIT_HASH: ''
OPENSTACK_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
OPENSTACK_CRDS_DIR: openstack_crds
OPENSTACK_CTLPLANE: config/samples/core_v1beta1_openstackcontrolplane_galera_network_isolation.yaml
OPENSTACK_IMG: quay.io/openstack-k8s-operators/openstack-operator-index:latest
OPENSTACK_K8S_BRANCH: main
OPENSTACK_K8S_TAG: latest
OPENSTACK_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/kuttl-test.yaml
OPENSTACK_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/openstack-operator/test/kuttl/tests
OPENSTACK_KUTTL_NAMESPACE: openstack-kuttl-tests
OPENSTACK_NEUTRON_CUSTOM_CONF: ''
OPENSTACK_REPO: https://github.com/openstack-k8s-operators/openstack-operator.git
OPENSTACK_STORAGE_BUNDLE_IMG: quay.io/openstack-k8s-operators/openstack-operator-storage-bundle:latest
OPERATOR_BASE_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator
OPERATOR_CHANNEL: ''
OPERATOR_NAMESPACE: openstack-operators
OPERATOR_SOURCE: ''
OPERATOR_SOURCE_NAMESPACE: ''
OUT: /home/zuul/ci-framework-data/artifacts/manifests
OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
OVNCONTROLLER: config/samples/ovn_v1beta1_ovncontroller.yaml
OVNCONTROLLER_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovncontroller.yaml
OVNCONTROLLER_NMAP: 'true'
OVNDBS: config/samples/ovn_v1beta1_ovndbcluster.yaml
OVNDBS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovndbcluster.yaml
OVNNORTHD: config/samples/ovn_v1beta1_ovnnorthd.yaml
OVNNORTHD_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/config/samples/ovn_v1beta1_ovnnorthd.yaml
OVN_BRANCH: main
OVN_COMMIT_HASH: ''
OVN_IMG: quay.io/openstack-k8s-operators/ovn-operator-index:latest
OVN_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/kuttl-test.yaml
OVN_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/ovn-operator/test/kuttl/tests
OVN_KUTTL_NAMESPACE: ovn-kuttl-tests
OVN_REPO: https://github.com/openstack-k8s-operators/ovn-operator.git
PASSWORD: '12**********78'
PLACEMENTAPI: config/samples/placement_v1beta1_placementapi.yaml
PLACEMENTAPI_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/config/samples/placement_v1beta1_placementapi.yaml
PLACEMENTAPI_DEPL_IMG: unused
PLACEMENT_BRANCH: main
PLACEMENT_COMMIT_HASH: ''
PLACEMENT_IMG: quay.io/openstack-k8s-operators/placement-operator-index:latest
PLACEMENT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/kuttl-test.yaml
PLACEMENT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/placement-operator/test/kuttl/tests
PLACEMENT_KUTTL_NAMESPACE: placement-kuttl-tests
PLACEMENT_REPO: https://github.com/openstack-k8s-operators/placement-operator.git
PULL_SECRET: /home/zuul/pull-secret.txt
RABBITMQ: docs/examples/default-security-context/rabbitmq.yaml
RABBITMQ_BRANCH: patches
RABBITMQ_COMMIT_HASH: ''
RABBITMQ_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/rabbitmq-operator/docs/examples/default-security-context/rabbitmq.yaml
RABBITMQ_DEPL_IMG: unused
RABBITMQ_IMG: quay.io/openstack-k8s-operators/rabbitmq-cluster-operator-index:latest
RABBITMQ_REPO: https://github.com/openstack-k8s-operators/rabbitmq-cluster-operator.git
REDHAT_OPERATORS: 'false'
REDIS: config/samples/redis_v1beta1_redis.yaml
REDIS_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/infra-operator-redis/config/samples/redis_v1beta1_redis.yaml
REDIS_DEPL_IMG: unused
RH_REGISTRY_PWD: ''
RH_REGISTRY_USER: ''
SECRET: os**********et
SG_CORE_DEPL_IMG: unused
STANDALONE_COMPUTE_DRIVER: libvirt
STANDALONE_EXTERNAL_NET_PREFFIX: 172.21.0
STANDALONE_INTERNALAPI_NET_PREFIX: 172.17.0
STANDALONE_STORAGEMGMT_NET_PREFIX: 172.20.0
STANDALONE_STORAGE_NET_PREFIX: 172.18.0
STANDALONE_TENANT_NET_PREFIX: 172.19.0
STORAGEMGMT_HOST_ROUTES: ''
STORAGE_CLASS: local-storage
STORAGE_HOST_ROUTES: ''
SWIFT: config/samples/swift_v1beta1_swift.yaml
SWIFT_BRANCH: main
SWIFT_COMMIT_HASH: ''
SWIFT_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/config/samples/swift_v1beta1_swift.yaml
SWIFT_IMG: quay.io/openstack-k8s-operators/swift-operator-index:latest
SWIFT_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/kuttl-test.yaml
SWIFT_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/swift-operator/test/kuttl/tests
SWIFT_KUTTL_NAMESPACE: swift-kuttl-tests
SWIFT_REPO: https://github.com/openstack-k8s-operators/swift-operator.git
TELEMETRY: config/samples/telemetry_v1beta1_telemetry.yaml
TELEMETRY_BRANCH: main
TELEMETRY_COMMIT_HASH: ''
TELEMETRY_CR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/config/samples/telemetry_v1beta1_telemetry.yaml
TELEMETRY_IMG: quay.io/openstack-k8s-operators/telemetry-operator-index:latest
TELEMETRY_KUTTL_BASEDIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator
TELEMETRY_KUTTL_CONF: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/kuttl-test.yaml
TELEMETRY_KUTTL_DIR: /home/zuul/ci-framework-data/artifacts/manifests/operator/telemetry-operator/test/kuttl/suites
TELEMETRY_KUTTL_NAMESPACE: telemetry-kuttl-tests
TELEMETRY_KUTTL_RELPATH: test/kuttl/suites
TELEMETRY_REPO: https://github.com/openstack-k8s-operators/telemetry-operator.git
TENANT_HOST_ROUTES: ''
TIMEOUT: 300s
TLS_ENABLED: 'false'
WATCHER_BRANCH: ''
WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator
tripleo_deploy: 'export REGISTRY_USER:'
2026-01-22 06:45:41,387 p=31191 u=zuul n=ansible | TASK [install_yamls : Generate make targets install_yamls_path={{ cifmw_install_yamls_repo }}, output_directory={{ cifmw_install_yamls_tasks_out }}] ***
2026-01-22 06:45:41,387 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.043) 0:01:02.939 ******
2026-01-22 06:45:41,387 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.043) 0:01:02.938 ******
2026-01-22 06:45:41,765 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:45:41,775 p=31191 u=zuul n=ansible | TASK [install_yamls : Debug generate_make module var=cifmw_generate_makes] *****
2026-01-22 06:45:41,775 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.387) 0:01:03.327 ******
2026-01-22 06:45:41,775 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.387) 0:01:03.325 ******
2026-01-22 06:45:41,811 p=31191 u=zuul n=ansible | ok: [controller] =>
  cifmw_generate_makes:
    changed: false
    debug:
      /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/Makefile:
      - all
      - help
      - cleanup
      - deploy_cleanup
      - wait
      - crc_storage
      - crc_storage_cleanup
      - crc_storage_release
      - crc_storage_with_retries
      - crc_storage_cleanup_with_retries
      - operator_namespace
      - namespace
      - namespace_cleanup
      - input
      - input_cleanup
      - crc_bmo_setup
      - crc_bmo_cleanup
      - openstack_prep
      - openstack
      - openstack_wait
      - openstack_init
      - openstack_cleanup
      - openstack_repo
      - openstack_deploy_prep
      - openstack_deploy
      - openstack_wait_deploy
      - openstack_deploy_cleanup
      - openstack_update_run
      - update_services
      - update_system
      - openstack_patch_version
      - edpm_deploy_generate_keys
      - edpm_patch_ansible_runner_image
      - edpm_deploy_prep
      - edpm_deploy_cleanup
      - edpm_deploy
      - edpm_deploy_baremetal_prep
      - edpm_deploy_baremetal
      - edpm_wait_deploy_baremetal
      - edpm_wait_deploy
      - edpm_register_dns
      - edpm_nova_discover_hosts
      - openstack_crds
      - openstack_crds_cleanup
      - edpm_deploy_networker_prep
      - edpm_deploy_networker_cleanup
      - edpm_deploy_networker
      - infra_prep
      - infra
      - infra_cleanup
      - dns_deploy_prep
      - dns_deploy
      - dns_deploy_cleanup
      - netconfig_deploy_prep
      - netconfig_deploy
      - netconfig_deploy_cleanup
      - memcached_deploy_prep
      - memcached_deploy
      - memcached_deploy_cleanup
      - keystone_prep
      - keystone
      - keystone_cleanup
      - keystone_deploy_prep
      - keystone_deploy
      - keystone_deploy_cleanup
      - barbican_prep
      - barbican
      - barbican_cleanup
      - barbican_deploy_prep
      - barbican_deploy
      - barbican_deploy_validate
      - barbican_deploy_cleanup
      - mariadb
      - mariadb_cleanup
      - mariadb_deploy_prep
      - mariadb_deploy
      - mariadb_deploy_cleanup
      - placement_prep
      - placement
      - placement_cleanup
      - placement_deploy_prep
      - placement_deploy
      - placement_deploy_cleanup
      - glance_prep
      - glance
      - glance_cleanup
      - glance_deploy_prep
      - glance_deploy
      - glance_deploy_cleanup
      - ovn_prep
      - ovn
      - ovn_cleanup
      - ovn_deploy_prep
      - ovn_deploy
      - ovn_deploy_cleanup
      - neutron_prep
      - neutron
      - neutron_cleanup
      - neutron_deploy_prep
      - neutron_deploy
      - neutron_deploy_cleanup
      - cinder_prep
      - cinder
      - cinder_cleanup
      - cinder_deploy_prep
      - cinder_deploy
      - cinder_deploy_cleanup
      - rabbitmq_prep
      - rabbitmq
      - rabbitmq_cleanup
      - rabbitmq_deploy_prep
      - rabbitmq_deploy
      - rabbitmq_deploy_cleanup
      - ironic_prep
      - ironic
      - ironic_cleanup
      - ironic_deploy_prep
      - ironic_deploy
      - ironic_deploy_cleanup
      - octavia_prep
      - octavia
      - octavia_cleanup
      - octavia_deploy_prep
      - octavia_deploy
      - octavia_deploy_cleanup
      - designate_prep
      - designate
      - designate_cleanup
      - designate_deploy_prep
      - designate_deploy
      - designate_deploy_cleanup
      - nova_prep
      - nova
      - nova_cleanup
      - nova_deploy_prep
      - nova_deploy
      - nova_deploy_cleanup
      - mariadb_kuttl_run
      - mariadb_kuttl
      - kuttl_db_prep
      - kuttl_db_cleanup
      - kuttl_common_prep
      - kuttl_common_cleanup
      - keystone_kuttl_run
      - keystone_kuttl
      - barbican_kuttl_run
      - barbican_kuttl
      - placement_kuttl_run
      - placement_kuttl
      - cinder_kuttl_run
      - cinder_kuttl
      - neutron_kuttl_run
      - neutron_kuttl
      - octavia_kuttl_run
      - octavia_kuttl
      - designate_kuttl
      - designate_kuttl_run
      - ovn_kuttl_run
      - ovn_kuttl
      - infra_kuttl_run
      - infra_kuttl
      - ironic_kuttl_run
      - ironic_kuttl
      - ironic_kuttl_crc
      - heat_kuttl_run
      - heat_kuttl
      - heat_kuttl_crc
      - ansibleee_kuttl_run
      - ansibleee_kuttl_cleanup
      - ansibleee_kuttl_prep
      - ansibleee_kuttl
      - glance_kuttl_run
      - glance_kuttl
      - manila_kuttl_run
      - manila_kuttl
      - swift_kuttl_run
      - swift_kuttl
      - horizon_kuttl_run
      - horizon_kuttl
      - openstack_kuttl_run
      - openstack_kuttl
      - mariadb_chainsaw_run
      - mariadb_chainsaw
      - horizon_prep
      - horizon
      - horizon_cleanup
      - horizon_deploy_prep
      - horizon_deploy
      - horizon_deploy_cleanup
      - heat_prep
      - heat
      - heat_cleanup
      - heat_deploy_prep
      - heat_deploy
      - heat_deploy_cleanup
      - ansibleee_prep
      - ansibleee
      - ansibleee_cleanup
      - baremetal_prep
      - baremetal
      - baremetal_cleanup
      - ceph_help
      - ceph
      - ceph_cleanup
      - rook_prep
      - rook
      - rook_deploy_prep
      - rook_deploy
      - rook_crc_disk
      - rook_cleanup
      - lvms
      - nmstate
      - nncp
      - nncp_cleanup
      - netattach
      - netattach_cleanup
      - metallb
      - metallb_config
      - metallb_config_cleanup
      - metallb_cleanup
      - loki
      - loki_cleanup
      - loki_deploy
      - loki_deploy_cleanup
      - netobserv
      - netobserv_cleanup
      - netobserv_deploy
      - netobserv_deploy_cleanup
      - manila_prep
      - manila
      - manila_cleanup
      - manila_deploy_prep
      - manila_deploy
      - manila_deploy_cleanup
      - telemetry_prep
      - telemetry
      - telemetry_cleanup
      - telemetry_deploy_prep
      - telemetry_deploy
      - telemetry_deploy_cleanup
      - telemetry_kuttl_run
      - telemetry_kuttl
      - swift_prep
      - swift
      - swift_cleanup
      - swift_deploy_prep
      - swift_deploy
      - swift_deploy_cleanup
      - certmanager
      - certmanager_cleanup
      - validate_marketplace
      - redis_deploy_prep
      - redis_deploy
      - redis_deploy_cleanup
      - set_slower_etcd_profile
      /home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/Makefile:
      - help
      - download_tools
      - nfs
      - nfs_cleanup
      - crc
      - crc_cleanup
      - crc_scrub
      - crc_attach_default_interface
      - crc_attach_default_interface_cleanup
      - ipv6_lab_network
      - ipv6_lab_network_cleanup
      - ipv6_lab_nat64_router
      - ipv6_lab_nat64_router_cleanup
      - ipv6_lab_sno
      - ipv6_lab_sno_cleanup
      - ipv6_lab
      - ipv6_lab_cleanup
      - attach_default_interface
      - attach_default_interface_cleanup
      - network_isolation_bridge
      - network_isolation_bridge_cleanup
      - edpm_baremetal_compute
      - edpm_compute
      - edpm_compute_bootc
      - edpm_ansible_runner
      - edpm_computes_bgp
      - edpm_compute_repos
      - edpm_compute_cleanup
      - edpm_networker
      - edpm_networker_cleanup
      - edpm_deploy_instance
      - tripleo_deploy
      - standalone_deploy
      - standalone_sync
      - standalone
      - standalone_cleanup
      - standalone_snapshot
      - standalone_revert
      - cifmw_prepare
      - cifmw_cleanup
      - bmaas_network
      - bmaas_network_cleanup
      - bmaas_route_crc_and_crc_bmaas_networks
      - bmaas_route_crc_and_crc_bmaas_networks_cleanup
      - bmaas_crc_attach_network
      - bmaas_crc_attach_network_cleanup
      - bmaas_crc_baremetal_bridge
      - bmaas_crc_baremetal_bridge_cleanup
      - bmaas_baremetal_net_nad
      - bmaas_baremetal_net_nad_cleanup
      - bmaas_metallb
      - bmaas_metallb_cleanup
      - bmaas_virtual_bms
      - bmaas_virtual_bms_cleanup
      - bmaas_sushy_emulator
      - bmaas_sushy_emulator_cleanup
      - bmaas_sushy_emulator_wait
      - bmaas_generate_nodes_yaml
      - bmaas
      - bmaas_cleanup
    failed: false
    success: true
2026-01-22 06:45:41,820 p=31191 u=zuul n=ansible | TASK [install_yamls : Create the install_yamls parameters file dest={{ cifmw_basedir }}/artifacts/parameters/install-yamls-params.yml, content={{ { 'cifmw_install_yamls_environment': cifmw_install_yamls_environment, 'cifmw_install_yamls_defaults': cifmw_install_yamls_defaults } | to_nice_yaml }}, mode=0644] ***
2026-01-22 06:45:41,820 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.045) 0:01:03.372 ******
2026-01-22 06:45:41,820 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:41 +0000 (0:00:00.045) 0:01:03.371 ******
2026-01-22 06:45:42,549 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:45:42,557 p=31191 u=zuul n=ansible | TASK [install_yamls : Create empty cifmw_install_yamls_environment if needed cifmw_install_yamls_environment={}] ***
2026-01-22 06:45:42,557 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:42 +0000 (0:00:00.736) 0:01:04.109 ******
2026-01-22 06:45:42,557 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:42 +0000 (0:00:00.736) 0:01:04.107 ******
2026-01-22 06:45:42,582 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:45:42,597 p=31191 u=zuul n=ansible | TASK [discover_latest_image : Get latest image url={{ cifmw_discover_latest_image_base_url }}, image_prefix={{ cifmw_discover_latest_image_qcow_prefix }}, images_file={{ cifmw_discover_latest_image_images_file }}] ***
2026-01-22 06:45:42,597 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:42 +0000 (0:00:00.040) 0:01:04.149 ******
2026-01-22 06:45:42,597 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:42 +0000 (0:00:00.040) 0:01:04.147 ******
2026-01-22 06:45:43,102 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:45:43,109 p=31191 u=zuul n=ansible | TASK [discover_latest_image : Export facts accordingly cifmw_discovered_image_name={{ discovered_image['data']['image_name'] }}, cifmw_discovered_image_url={{ discovered_image['data']['image_url'] }}, cifmw_discovered_hash={{ discovered_image['data']['hash'] }}, cifmw_discovered_hash_algorithm={{ discovered_image['data']['hash_algorithm'] }}, cacheable=True] ***
2026-01-22 06:45:43,109 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.512) 0:01:04.661 ******
2026-01-22 06:45:43,109 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.512) 0:01:04.660 ******
2026-01-22 06:45:43,134 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:45:43,146 p=31191 u=zuul n=ansible | TASK [cifmw_setup : Create artifacts with custom params mode=0644, dest={{ cifmw_basedir }}/artifacts/parameters/custom-params.yml, content={{ ci_framework_params | to_nice_yaml }}] ***
2026-01-22 06:45:43,146 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.037) 0:01:04.698 ******
2026-01-22 06:45:43,146 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.037) 0:01:04.697 ******
2026-01-22 06:45:43,833 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:45:43,867 p=31191 u=zuul n=ansible | PLAY [Install dev tools] *******************************************************
2026-01-22 06:45:43,884 p=31191 u=zuul n=ansible | TASK [Assert that operator_name is set that=['operator_name is defined']] ******
2026-01-22 06:45:43,884 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.738) 0:01:05.436 ******
2026-01-22 06:45:43,884 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.738) 0:01:05.435 ******
2026-01-22 06:45:43,906 p=31191 u=zuul n=ansible | ok: [controller] =>
  changed: false
  msg: All assertions passed
2026-01-22 06:45:43,915 p=31191 u=zuul n=ansible | TASK [Download install_yamls deps name=install_yamls_makes, tasks_from=make_download_tools] ***
2026-01-22 06:45:43,915 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.030) 0:01:05.467 ******
2026-01-22 06:45:43,915 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.030) 0:01:05.466 ******
2026-01-22 06:45:43,953 p=31191 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_download_tools_env var=make_download_tools_env] ***
2026-01-22 06:45:43,953 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.038) 0:01:05.505 ******
2026-01-22 06:45:43,954 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.038) 0:01:05.504 ******
2026-01-22 06:45:43,976 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:45:43,983 p=31191 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_download_tools_params var=make_download_tools_params] ***
2026-01-22 06:45:43,983 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.030) 0:01:05.535 ******
2026-01-22 06:45:43,984 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:43 +0000 (0:00:00.030) 0:01:05.534 ******
2026-01-22 06:45:44,006 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:45:44,015 p=31191 u=zuul n=ansible | TASK [install_yamls_makes : Run download_tools output_dir={{ cifmw_basedir }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup, script=make download_tools, dry_run={{ make_download_tools_dryrun|default(false)|bool }}, extra_args={{ dict((make_download_tools_env|default({})), **(make_download_tools_params|default({}))) }}] ***
2026-01-22 06:45:44,016 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:44 +0000 (0:00:00.032) 0:01:05.567 ******
2026-01-22 06:45:44,016 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:45:44 +0000 (0:00:00.032) 0:01:05.566 ******
2026-01-22 06:45:44,139 p=31191 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_000_run_download.log
2026-01-22 06:46:41,973 p=31191 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ make_download_tools_until | default(true) }}
2026-01-22 06:46:41,975 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:46:41,988 p=31191 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-22 06:46:41,988 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:41 +0000 (0:00:57.972) 0:02:03.540 ******
2026-01-22 06:46:41,988 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:41 +0000 (0:00:57.972) 0:02:03.538 ******
2026-01-22 06:46:42,044 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:42,054 p=31191 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-22 06:46:42,054 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.066) 0:02:03.606 ******
2026-01-22 06:46:42,054 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.066) 0:02:03.604 ******
2026-01-22 06:46:42,137 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:42,145 p=31191 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_infra _raw_params={{ hook.type }}.yml] ***
2026-01-22 06:46:42,145 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.091) 0:02:03.697 ******
2026-01-22 06:46:42,145 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.091) 0:02:03.696 ******
2026-01-22 06:46:42,244 p=31191 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for controller => (item={'name': 'Download needed tools', 'inventory': 'localhost,', 'connection': 'local', 'type': 'playbook', 'source': '/home/zuul/src/github.com/openstack-k8s-operators/install_yamls/devsetup/download_tools.yaml'})
2026-01-22 06:46:42,252 p=31191 u=zuul n=ansible | TASK [run_hook : Set playbook path for Download needed tools cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] ***
2026-01-22 06:46:42,252 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.107) 0:02:03.804 ******
2026-01-22 06:46:42,252 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.107) 0:02:03.803 ******
2026-01-22 06:46:42,291 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:42,298 p=31191 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] ***********************
2026-01-22 06:46:42,298 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.045) 0:02:03.850 ******
2026-01-22 06:46:42,298 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.045) 0:02:03.849 ******
2026-01-22 06:46:42,523 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:42,532 p=31191 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] ***
2026-01-22 06:46:42,532 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.233) 0:02:04.084 ******
2026-01-22 06:46:42,532 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.233) 0:02:04.082 ******
2026-01-22 06:46:42,554 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:46:42,575 p=31191 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] ***
2026-01-22 06:46:42,575 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.043) 0:02:04.127 ******
2026-01-22 06:46:42,575 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.043) 0:02:04.126 ******
2026-01-22 06:46:42,911 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:42,927 p=31191 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] ***
2026-01-22 06:46:42,927 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.352) 0:02:04.479 ******
2026-01-22 06:46:42,927 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.352) 0:02:04.478 ******
2026-01-22 06:46:42,946 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:42,953 p=31191 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] ***
2026-01-22 06:46:42,953 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.025) 0:02:04.505 ******
2026-01-22 06:46:42,953 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:42 +0000 (0:00:00.025) 0:02:04.504 ******
2026-01-22 06:46:43,197 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:43,204 p=31191 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] ***
2026-01-22 06:46:43,204 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:43 +0000 (0:00:00.250) 0:02:04.756 ******
2026-01-22 06:46:43,204 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:43 +0000 (0:00:00.250) 0:02:04.755 ******
2026-01-22 06:46:43,422 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:46:43,430 p=31191 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Download needed tools] ***************
2026-01-22 06:46:43,431 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:43 +0000 (0:00:00.226) 0:02:04.982 ******
2026-01-22 06:46:43,431 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:46:43 +0000 (0:00:00.226) 0:02:04.981 ******
2026-01-22 06:46:43,544 p=31191 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_001_run_hook_without_retry.log
2026-01-22 06:47:13,721 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:47:13,736 p=31191 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Download needed tools] ******************
2026-01-22 06:47:13,737 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:13 +0000 (0:00:30.306) 0:02:35.288 ******
2026-01-22 06:47:13,737 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:13 +0000 (0:00:30.306) 0:02:35.287 ******
2026-01-22 06:47:13,755 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:47:13,765 p=31191 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-22 06:47:13,765 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:13 +0000 (0:00:00.028) 0:02:35.317 ****** 2026-01-22 06:47:13,765 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:13 +0000 (0:00:00.028) 0:02:35.316 ****** 2026-01-22 06:47:13,995 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:14,002 p=31191 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] *** 2026-01-22 06:47:14,002 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.236) 0:02:35.553 ****** 2026-01-22 06:47:14,002 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.236) 0:02:35.552 ****** 2026-01-22 06:47:14,017 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:14,049 p=31191 u=zuul n=ansible | PLAY [Prepare host virtualization] ********************************************* 2026-01-22 06:47:14,066 p=31191 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-22 06:47:14,066 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.064) 0:02:35.618 ****** 2026-01-22 06:47:14,066 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.064) 0:02:35.617 ****** 2026-01-22 06:47:14,169 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:14,181 p=31191 u=zuul n=ansible | TASK [Ensure libvirt is present/configured name=libvirt_manager] *************** 2026-01-22 06:47:14,181 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.114) 0:02:35.733 ****** 2026-01-22 06:47:14,181 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.114) 0:02:35.732 ****** 2026-01-22 06:47:14,208 p=31191 u=zuul n=ansible 
| skipping: [controller] 2026-01-22 06:47:14,215 p=31191 u=zuul n=ansible | TASK [Prepare OpenShift provisioner node name=openshift_provisioner_node] ****** 2026-01-22 06:47:14,215 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.034) 0:02:35.767 ****** 2026-01-22 06:47:14,215 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.034) 0:02:35.766 ****** 2026-01-22 06:47:14,239 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:14,274 p=31191 u=zuul n=ansible | PLAY [Build dataset hook] ****************************************************** 2026-01-22 06:47:14,305 p=31191 u=zuul n=ansible | TASK [cifmw_setup : Load parameters files dir={{ cifmw_basedir }}/artifacts/parameters] *** 2026-01-22 06:47:14,305 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.089) 0:02:35.857 ****** 2026-01-22 06:47:14,305 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.089) 0:02:35.856 ****** 2026-01-22 06:47:14,417 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:14,425 p=31191 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Environment Definition file existence path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2026-01-22 06:47:14,425 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.120) 0:02:35.977 ****** 2026-01-22 06:47:14,426 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:14 +0000 (0:00:00.120) 0:02:35.976 ****** 2026-01-22 06:47:15,149 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:15,155 p=31191 u=zuul n=ansible | TASK [networking_mapper : Check for Networking Definition file existence that=['_net_env_def_stat.stat.exists'], msg=Ensure that the Networking Environment Definition file exists in {{ cifmw_networking_mapper_networking_env_def_path }}, quiet=True] *** 2026-01-22 06:47:15,155 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.729)
0:02:36.707 ****** 2026-01-22 06:47:15,156 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.729) 0:02:36.706 ****** 2026-01-22 06:47:15,190 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,199 p=31191 u=zuul n=ansible | TASK [networking_mapper : Load the Networking Definition from file path={{ cifmw_networking_mapper_networking_env_def_path }}] *** 2026-01-22 06:47:15,199 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.043) 0:02:36.750 ****** 2026-01-22 06:47:15,199 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.043) 0:02:36.749 ****** 2026-01-22 06:47:15,218 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,229 p=31191 u=zuul n=ansible | TASK [networking_mapper : Set cifmw_networking_env_definition is present cifmw_networking_env_definition={{ _net_env_def_slurp['content'] | b64decode | from_yaml }}, cacheable=True] *** 2026-01-22 06:47:15,229 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.030) 0:02:36.781 ****** 2026-01-22 06:47:15,229 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.030) 0:02:36.780 ****** 2026-01-22 06:47:15,257 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,277 p=31191 u=zuul n=ansible | TASK [Deploy OCP using Hive name=hive] ***************************************** 2026-01-22 06:47:15,277 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.047) 0:02:36.829 ****** 2026-01-22 06:47:15,277 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.047) 0:02:36.828 ****** 2026-01-22 06:47:15,301 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,313 p=31191 u=zuul n=ansible | TASK [Prepare CRC name=rhol_crc] *********************************************** 2026-01-22 06:47:15,313 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.035) 0:02:36.865 ****** 
2026-01-22 06:47:15,313 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.035) 0:02:36.864 ****** 2026-01-22 06:47:15,337 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,348 p=31191 u=zuul n=ansible | TASK [Deploy OpenShift cluster using dev-scripts name=devscripts] ************** 2026-01-22 06:47:15,348 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.035) 0:02:36.900 ****** 2026-01-22 06:47:15,348 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.035) 0:02:36.899 ****** 2026-01-22 06:47:15,371 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,382 p=31191 u=zuul n=ansible | TASK [openshift_login : Ensure output directory exists path={{ cifmw_openshift_login_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-22 06:47:15,383 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.034) 0:02:36.934 ****** 2026-01-22 06:47:15,383 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.034) 0:02:36.933 ****** 2026-01-22 06:47:15,634 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:15,648 p=31191 u=zuul n=ansible | TASK [openshift_login : OpenShift login _raw_params=login.yml] ***************** 2026-01-22 06:47:15,649 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.265) 0:02:37.200 ****** 2026-01-22 06:47:15,649 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.265) 0:02:37.199 ****** 2026-01-22 06:47:15,690 p=31191 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/login.yml for controller 2026-01-22 06:47:15,707 p=31191 u=zuul n=ansible | TASK [openshift_login : Check if the password file is present path={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2026-01-22 06:47:15,707 p=31191 u=zuul n=ansible | Thursday 22 January 2026 
06:47:15 +0000 (0:00:00.058) 0:02:37.259 ****** 2026-01-22 06:47:15,707 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.058) 0:02:37.257 ****** 2026-01-22 06:47:15,729 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,739 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch user password content src={{ cifmw_openshift_login_password_file | default(cifmw_openshift_password_file) }}] *** 2026-01-22 06:47:15,739 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.032) 0:02:37.291 ****** 2026-01-22 06:47:15,739 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.032) 0:02:37.290 ****** 2026-01-22 06:47:15,759 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,769 p=31191 u=zuul n=ansible | TASK [openshift_login : Set user password as a fact cifmw_openshift_login_password={{ cifmw_openshift_login_password_file_slurp.content | b64decode }}, cacheable=True] *** 2026-01-22 06:47:15,770 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.030) 0:02:37.321 ****** 2026-01-22 06:47:15,770 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.030) 0:02:37.320 ****** 2026-01-22 06:47:15,789 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:15,796 p=31191 u=zuul n=ansible | TASK [openshift_login : Set role variables cifmw_openshift_login_kubeconfig={{ cifmw_openshift_login_kubeconfig | default(cifmw_openshift_kubeconfig) | default( ansible_env.KUBECONFIG if 'KUBECONFIG' in ansible_env else cifmw_openshift_login_kubeconfig_default_path ) | trim }}, cifmw_openshift_login_user={{ cifmw_openshift_login_user | default(cifmw_openshift_user) | default(omit) }}, cifmw_openshift_login_password={{********** cifmw_openshift_login_password | default(cifmw_openshift_password) | default(omit) }}, cifmw_openshift_login_api={{ cifmw_openshift_login_api | default(cifmw_openshift_api) | default(omit) }}, 
cifmw_openshift_login_cert_login={{ cifmw_openshift_login_cert_login | default(false)}}, cifmw_openshift_login_provided_token={{ cifmw_openshift_provided_token | default(omit) }}, cacheable=True] *** 2026-01-22 06:47:15,796 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.026) 0:02:37.348 ****** 2026-01-22 06:47:15,796 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.026) 0:02:37.347 ****** 2026-01-22 06:47:15,829 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:15,842 p=31191 u=zuul n=ansible | TASK [openshift_login : Check if kubeconfig exists path={{ cifmw_openshift_login_kubeconfig }}] *** 2026-01-22 06:47:15,842 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.045) 0:02:37.394 ****** 2026-01-22 06:47:15,842 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:15 +0000 (0:00:00.045) 0:02:37.392 ****** 2026-01-22 06:47:16,054 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:16,062 p=31191 u=zuul n=ansible | TASK [openshift_login : Assert that enough data is provided to log in to OpenShift that=cifmw_openshift_login_kubeconfig_stat.stat.exists or (cifmw_openshift_login_provided_token is defined and cifmw_openshift_login_provided_token != '') or ( (cifmw_openshift_login_user is defined) and (cifmw_openshift_login_password is defined) and (cifmw_openshift_login_api is defined) ), msg=If an existing kubeconfig is not provided user/pwd or provided/initial token and API URL must be given] *** 2026-01-22 06:47:16,062 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.220) 0:02:37.614 ****** 2026-01-22 06:47:16,062 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.220) 0:02:37.612 ****** 2026-01-22 06:47:16,105 p=31191 u=zuul n=ansible | ok: [controller] => changed: false msg: All assertions passed 2026-01-22 06:47:16,112 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch kubeconfig content src={{ 
cifmw_openshift_login_kubeconfig }}] *** 2026-01-22 06:47:16,112 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.050) 0:02:37.664 ****** 2026-01-22 06:47:16,112 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.050) 0:02:37.663 ****** 2026-01-22 06:47:16,137 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:16,150 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch x509 key based users cifmw_openshift_login_key_based_users={{ ( cifmw_openshift_login_kubeconfig_content_b64.content | b64decode | from_yaml ). users | default([]) | selectattr('user.client-certificate-data', 'defined') | map(attribute="name") | map("split", "/") | map("first") }}, cacheable=True] *** 2026-01-22 06:47:16,150 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.037) 0:02:37.702 ****** 2026-01-22 06:47:16,150 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.037) 0:02:37.700 ****** 2026-01-22 06:47:16,173 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:16,185 p=31191 u=zuul n=ansible | TASK [openshift_login : Assign key based user if not provided and available cifmw_openshift_login_user={{ (cifmw_openshift_login_assume_cert_system_user | ternary('system:', '')) + (cifmw_openshift_login_key_based_users | map('replace', 'system:', '') | unique | first) }}, cifmw_openshift_login_cert_login=True, cacheable=True] *** 2026-01-22 06:47:16,185 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.035) 0:02:37.737 ****** 2026-01-22 06:47:16,186 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.035) 0:02:37.736 ****** 2026-01-22 06:47:16,209 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:16,218 p=31191 u=zuul n=ansible | TASK [openshift_login : Set the retry count cifmw_openshift_login_retries_cnt={{ 0 if cifmw_openshift_login_retries_cnt is undefined else 
cifmw_openshift_login_retries_cnt|int + 1 }}] *** 2026-01-22 06:47:16,218 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.032) 0:02:37.770 ****** 2026-01-22 06:47:16,219 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.032) 0:02:37.769 ****** 2026-01-22 06:47:16,248 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:16,257 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch token _raw_params=try_login.yml] ***************** 2026-01-22 06:47:16,257 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.038) 0:02:37.809 ****** 2026-01-22 06:47:16,258 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.038) 0:02:37.808 ****** 2026-01-22 06:47:16,285 p=31191 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/openshift_login/tasks/try_login.yml for controller 2026-01-22 06:47:16,298 p=31191 u=zuul n=ansible | TASK [openshift_login : Try get OpenShift access token _raw_params=oc whoami -t] *** 2026-01-22 06:47:16,298 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.040) 0:02:37.850 ****** 2026-01-22 06:47:16,299 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.040) 0:02:37.849 ****** 2026-01-22 06:47:16,316 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:16,323 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift token output_dir={{ cifmw_openshift_login_basedir }}/artifacts, script=oc login {%- if cifmw_openshift_login_provided_token is not defined %} {%- if cifmw_openshift_login_user is defined %} -u {{ cifmw_openshift_login_user }} {%- endif %} {%- if cifmw_openshift_login_password is defined %} -p {{ cifmw_openshift_login_password }} {%- endif %} {% else %} --token={{ cifmw_openshift_login_provided_token }} {%- endif %} {%- if cifmw_openshift_login_skip_tls_verify|bool %} --insecure-skip-tls-verify=true {%- endif %} {%- if 
cifmw_openshift_login_api is defined %} {{ cifmw_openshift_login_api }} {%- endif %}] *** 2026-01-22 06:47:16,323 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.024) 0:02:37.875 ****** 2026-01-22 06:47:16,323 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:16 +0000 (0:00:00.024) 0:02:37.874 ****** 2026-01-22 06:47:16,468 p=31191 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_002_fetch_openshift.log 2026-01-22 06:47:17,109 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:47:17,124 p=31191 u=zuul n=ansible | TASK [openshift_login : Ensure kubeconfig is provided that=cifmw_openshift_login_kubeconfig != ""] *** 2026-01-22 06:47:17,124 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.801) 0:02:38.676 ****** 2026-01-22 06:47:17,125 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.801) 0:02:38.675 ****** 2026-01-22 06:47:17,148 p=31191 u=zuul n=ansible | ok: [controller] => changed: false msg: All assertions passed 2026-01-22 06:47:17,158 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch new OpenShift access token _raw_params=oc whoami -t] *** 2026-01-22 06:47:17,158 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.033) 0:02:38.710 ****** 2026-01-22 06:47:17,158 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.033) 0:02:38.709 ****** 2026-01-22 06:47:17,530 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:47:17,537 p=31191 u=zuul n=ansible | TASK [openshift_login : Set new OpenShift token cifmw_openshift_login_token={{ (not cifmw_openshift_login_new_token_out.skipped | default(false)) | ternary(cifmw_openshift_login_new_token_out.stdout, cifmw_openshift_login_whoami_out.stdout) }}, cacheable=True] *** 2026-01-22 06:47:17,537 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.378) 0:02:39.089 ****** 2026-01-22 06:47:17,537 
p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.378) 0:02:39.087 ****** 2026-01-22 06:47:17,563 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:17,571 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift API URL _raw_params=oc whoami --show-server=true] *** 2026-01-22 06:47:17,571 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.034) 0:02:39.123 ****** 2026-01-22 06:47:17,571 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.034) 0:02:39.122 ****** 2026-01-22 06:47:17,945 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:47:17,953 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift kubeconfig context _raw_params=oc whoami -c] *** 2026-01-22 06:47:17,953 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.381) 0:02:39.505 ****** 2026-01-22 06:47:17,953 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:17 +0000 (0:00:00.381) 0:02:39.504 ****** 2026-01-22 06:47:18,321 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:47:18,333 p=31191 u=zuul n=ansible | TASK [openshift_login : Fetch OpenShift current user _raw_params=oc whoami] **** 2026-01-22 06:47:18,333 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:18 +0000 (0:00:00.379) 0:02:39.884 ****** 2026-01-22 06:47:18,333 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:18 +0000 (0:00:00.379) 0:02:39.883 ****** 2026-01-22 06:47:18,734 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:47:18,752 p=31191 u=zuul n=ansible | TASK [openshift_login : Set OpenShift user, context and API facts cifmw_openshift_login_api={{ cifmw_openshift_login_api_out.stdout }}, cifmw_openshift_login_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_login_user={{ _oauth_user }}, cifmw_openshift_kubeconfig={{ cifmw_openshift_login_kubeconfig }}, cifmw_openshift_api={{ cifmw_openshift_login_api_out.stdout }}, 
cifmw_openshift_context={{ cifmw_openshift_login_context_out.stdout }}, cifmw_openshift_user={{ _oauth_user }}, cifmw_openshift_token={{ cifmw_openshift_login_token | default(omit) }}, cifmw_install_yamls_environment={{ ( cifmw_install_yamls_environment | combine({'KUBECONFIG': cifmw_openshift_login_kubeconfig}) ) if cifmw_install_yamls_environment is defined else omit }}, cacheable=True] *** 2026-01-22 06:47:18,752 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:18 +0000 (0:00:00.419) 0:02:40.304 ****** 2026-01-22 06:47:18,752 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:18 +0000 (0:00:00.419) 0:02:40.302 ****** 2026-01-22 06:47:18,785 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:18,794 p=31191 u=zuul n=ansible | TASK [openshift_login : Create the openshift_login parameters file dest={{ cifmw_basedir }}/artifacts/parameters/openshift-login-params.yml, content={{ cifmw_openshift_login_params_content | from_yaml | to_nice_yaml }}, mode=0600] *** 2026-01-22 06:47:18,794 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:18 +0000 (0:00:00.042) 0:02:40.346 ****** 2026-01-22 06:47:18,794 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:18 +0000 (0:00:00.042) 0:02:40.345 ****** 2026-01-22 06:47:19,502 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:47:19,513 p=31191 u=zuul n=ansible | TASK [openshift_login : Read the install yamls parameters file path={{ cifmw_basedir }}/artifacts/parameters/install-yamls-params.yml] *** 2026-01-22 06:47:19,513 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:19 +0000 (0:00:00.718) 0:02:41.065 ****** 2026-01-22 06:47:19,513 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:19 +0000 (0:00:00.718) 0:02:41.063 ****** 2026-01-22 06:47:19,861 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:19,869 p=31191 u=zuul n=ansible | TASK [openshift_login : Append the KUBECONFIG to the install yamls parameters content={{ 
cifmw_openshift_login_install_yamls_artifacts_slurp['content'] | b64decode | from_yaml | combine( { 'cifmw_install_yamls_environment': { 'KUBECONFIG': cifmw_openshift_login_kubeconfig } }, recursive=true) | to_nice_yaml }}, dest={{ cifmw_basedir }}/artifacts/parameters/install-yamls-params.yml, mode=0600] *** 2026-01-22 06:47:19,869 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:19 +0000 (0:00:00.355) 0:02:41.421 ****** 2026-01-22 06:47:19,869 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:19 +0000 (0:00:00.355) 0:02:41.419 ****** 2026-01-22 06:47:20,597 p=31191 u=zuul n=ansible | changed: [controller] 2026-01-22 06:47:20,615 p=31191 u=zuul n=ansible | TASK [openshift_setup : Ensure output directory exists path={{ cifmw_openshift_setup_basedir }}/artifacts, state=directory, mode=0755] *** 2026-01-22 06:47:20,615 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:20 +0000 (0:00:00.746) 0:02:42.167 ****** 2026-01-22 06:47:20,615 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:20 +0000 (0:00:00.746) 0:02:42.166 ****** 2026-01-22 06:47:20,848 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:20,864 p=31191 u=zuul n=ansible | TASK [openshift_setup : Fetch namespaces to create cifmw_openshift_setup_namespaces={{ (( ([cifmw_install_yamls_defaults['NAMESPACE']] + ([cifmw_install_yamls_defaults['OPERATOR_NAMESPACE']] if 'OPERATOR_NAMESPACE' is in cifmw_install_yamls_defaults else []) ) if cifmw_install_yamls_defaults is defined else [] ) + cifmw_openshift_setup_create_namespaces) | unique }}] *** 2026-01-22 06:47:20,864 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:20 +0000 (0:00:00.249) 0:02:42.416 ****** 2026-01-22 06:47:20,864 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:20 +0000 (0:00:00.249) 0:02:42.415 ****** 2026-01-22 06:47:20,893 p=31191 u=zuul n=ansible | ok: [controller] 2026-01-22 06:47:20,915 p=31191 u=zuul n=ansible | TASK [openshift_setup : Create required namespaces kubeconfig={{ 
cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit) }}, name={{ item }}, kind=Namespace, state=present] *** 2026-01-22 06:47:20,915 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:20 +0000 (0:00:00.050) 0:02:42.467 ****** 2026-01-22 06:47:20,915 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:20 +0000 (0:00:00.050) 0:02:42.466 ****** 2026-01-22 06:47:22,035 p=31191 u=zuul n=ansible | changed: [controller] => (item=openstack) 2026-01-22 06:47:22,944 p=31191 u=zuul n=ansible | changed: [controller] => (item=openstack-operators) 2026-01-22 06:47:22,955 p=31191 u=zuul n=ansible | TASK [openshift_setup : Get internal OpenShift registry route kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, kind=Route, name=default-route, namespace=openshift-image-registry] *** 2026-01-22 06:47:22,955 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:22 +0000 (0:00:02.039) 0:02:44.507 ****** 2026-01-22 06:47:22,955 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:22 +0000 (0:00:02.039) 0:02:44.505 ****** 2026-01-22 06:47:22,977 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:22,985 p=31191 u=zuul n=ansible | TASK [openshift_setup : Allow anonymous image-pulls in CRC registry for targeted namespaces state=present, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'kind': 'RoleBinding', 'apiVersion': 'rbac.authorization.k8s.io/v1', 'metadata': {'name': 'system:image-puller', 'namespace': '{{ item }}'}, 'subjects': [{'kind': 'User', 'name': 'system:anonymous'}, {'kind': 'User', 'name': 'system:unauthenticated'}], 'roleRef': {'kind': 'ClusterRole', 'name': 'system:image-puller'}}] *** 2026-01-22 06:47:22,985 p=31191 u=zuul n=ansible | Thursday 
22 January 2026 06:47:22 +0000 (0:00:00.030) 0:02:44.537 ****** 2026-01-22 06:47:22,985 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:22 +0000 (0:00:00.030) 0:02:44.535 ****** 2026-01-22 06:47:23,011 p=31191 u=zuul n=ansible | skipping: [controller] => (item=openstack) 2026-01-22 06:47:23,011 p=31191 u=zuul n=ansible | skipping: [controller] => (item=openstack-operators) 2026-01-22 06:47:23,012 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,018 p=31191 u=zuul n=ansible | TASK [openshift_setup : Wait for the image registry to be ready kind=Deployment, name=image-registry, namespace=openshift-image-registry, kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, wait=True, wait_sleep=10, wait_timeout=600, wait_condition={'type': 'Available', 'status': 'True'}] *** 2026-01-22 06:47:23,018 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.033) 0:02:44.570 ****** 2026-01-22 06:47:23,018 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.033) 0:02:44.569 ****** 2026-01-22 06:47:23,043 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,054 p=31191 u=zuul n=ansible | TASK [openshift_setup : Login into OpenShift internal registry output_dir={{ cifmw_openshift_setup_basedir }}/artifacts, script=podman login -u {{ cifmw_openshift_user }} -p {{ cifmw_openshift_token }} {%- if cifmw_openshift_setup_skip_internal_registry_tls_verify|bool %} --tls-verify=false {%- endif %} {{ cifmw_openshift_setup_registry_default_route.resources[0].spec.host }}] *** 2026-01-22 06:47:23,054 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.035) 0:02:44.606 ****** 2026-01-22 06:47:23,054 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.035) 0:02:44.605 ****** 2026-01-22 06:47:23,076 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 
06:47:23,087 p=31191 u=zuul n=ansible | TASK [Ensure we have custom CA installed on host role=install_ca] ************** 2026-01-22 06:47:23,087 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.032) 0:02:44.639 ****** 2026-01-22 06:47:23,087 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.032) 0:02:44.637 ****** 2026-01-22 06:47:23,107 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,117 p=31191 u=zuul n=ansible | TASK [openshift_setup : Update ca bundle _raw_params=update-ca-trust extract] *** 2026-01-22 06:47:23,117 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.030) 0:02:44.669 ****** 2026-01-22 06:47:23,117 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.030) 0:02:44.668 ****** 2026-01-22 06:47:23,137 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,146 p=31191 u=zuul n=ansible | TASK [openshift_setup : Slurp CAs file src={{ cifmw_openshift_setup_ca_bundle_path }}] *** 2026-01-22 06:47:23,146 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.028) 0:02:44.698 ****** 2026-01-22 06:47:23,146 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.028) 0:02:44.696 ****** 2026-01-22 06:47:23,167 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,176 p=31191 u=zuul n=ansible | TASK [openshift_setup : Create config map with registry CAs kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'v1', 'kind': 'ConfigMap', 'metadata': {'namespace': 'openshift-config', 'name': 'registry-cas'}, 'data': '{{ _config_map_data | items2dict }}'}] *** 2026-01-22 06:47:23,176 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.030) 0:02:44.728 ****** 2026-01-22 06:47:23,176 p=31191 u=zuul n=ansible | Thursday 22 January 2026 
06:47:23 +0000 (0:00:00.030) 0:02:44.727 ****** 2026-01-22 06:47:23,199 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,207 p=31191 u=zuul n=ansible | TASK [openshift_setup : Install Red Hat CA for pulling images from internal registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'additionalTrustedCA': {'name': 'registry-cas'}}}] *** 2026-01-22 06:47:23,207 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.030) 0:02:44.759 ****** 2026-01-22 06:47:23,207 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.030) 0:02:44.758 ****** 2026-01-22 06:47:23,229 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,239 p=31191 u=zuul n=ansible | TASK [openshift_setup : Add insecure registry kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, merge_type=merge, definition={'apiVersion': 'config.openshift.io/v1', 'kind': 'Image', 'metadata': {'name': 'cluster'}, 'spec': {'registrySources': {'insecureRegistries': ['{{ cifmw_update_containers_registry }}'], 'allowedRegistries': '{{ all_registries }}'}}}] *** 2026-01-22 06:47:23,239 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.031) 0:02:44.791 ****** 2026-01-22 06:47:23,239 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.031) 0:02:44.789 ****** 2026-01-22 06:47:23,259 p=31191 u=zuul n=ansible | skipping: [controller] 2026-01-22 06:47:23,266 p=31191 u=zuul n=ansible | TASK [openshift_setup : Create a ICSP with repository digest mirrors kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ 
cifmw_openshift_context | default(omit)}}, definition={'apiVersion': 'operator.openshift.io/v1alpha1', 'kind': 'ImageContentSourcePolicy', 'metadata': {'name': 'registry-digest-mirrors'}, 'spec': {'repositoryDigestMirrors': '{{ cifmw_openshift_setup_digest_mirrors }}'}}] ***
2026-01-22 06:47:23,266 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.027) 0:02:44.818 ******
2026-01-22 06:47:23,267 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.027) 0:02:44.817 ******
2026-01-22 06:47:23,290 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:47:23,299 p=31191 u=zuul n=ansible | TASK [openshift_setup : Gather network.operator info kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=operator.openshift.io/v1, kind=Network, name=cluster] ***
2026-01-22 06:47:23,300 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.033) 0:02:44.851 ******
2026-01-22 06:47:23,300 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:23 +0000 (0:00:00.033) 0:02:44.850 ******
2026-01-22 06:47:24,375 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:47:24,391 p=31191 u=zuul n=ansible | TASK [openshift_setup : Patch network operator api_version=operator.openshift.io/v1, kubeconfig={{ cifmw_openshift_kubeconfig }}, kind=Network, name=cluster, persist_config=True, patch=[{'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/routingViaHost', 'value': True, 'op': 'replace'}, {'path': '/spec/defaultNetwork/ovnKubernetesConfig/gatewayConfig/ipForwarding', 'value': 'Global', 'op': 'replace'}]] ***
2026-01-22 06:47:24,391 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:24 +0000 (0:00:01.091) 0:02:45.943 ******
2026-01-22 06:47:24,391 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:24 +0000 (0:00:01.091) 0:02:45.941 ******
2026-01-22 06:47:25,402 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:47:25,411 p=31191 u=zuul n=ansible | TASK [openshift_setup : Patch samples registry configuration kubeconfig={{ cifmw_openshift_kubeconfig }}, api_key={{ cifmw_openshift_token | default(omit)}}, context={{ cifmw_openshift_context | default(omit)}}, api_version=samples.operator.openshift.io/v1, kind=Config, name=cluster, patch=[{'op': 'replace', 'path': '/spec/samplesRegistry', 'value': 'registry.redhat.io'}]] ***
2026-01-22 06:47:25,411 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:25 +0000 (0:00:01.020) 0:02:46.963 ******
2026-01-22 06:47:25,411 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:25 +0000 (0:00:01.020) 0:02:46.962 ******
2026-01-22 06:47:26,282 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:47:26,291 p=31191 u=zuul n=ansible | TASK [openshift_setup : Delete the pods from openshift-marketplace namespace kind=Pod, state=absent, delete_all=True, kubeconfig={{ cifmw_openshift_kubeconfig }}, namespace=openshift-marketplace] ***
2026-01-22 06:47:26,291 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.880) 0:02:47.843 ******
2026-01-22 06:47:26,291 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.880) 0:02:47.842 ******
2026-01-22 06:47:26,307 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:47:26,315 p=31191 u=zuul n=ansible | TASK [openshift_setup : Wait for openshift-marketplace pods to be running _raw_params=oc wait pod --all --for=condition=Ready -n openshift-marketplace --timeout=1m] ***
2026-01-22 06:47:26,315 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.023) 0:02:47.867 ******
2026-01-22 06:47:26,315 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.023) 0:02:47.866 ******
2026-01-22 06:47:26,335 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:47:26,354 p=31191 u=zuul n=ansible | TASK [Deploy Observability operator. name=openshift_obs] ***********************
2026-01-22 06:47:26,354 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.038) 0:02:47.906 ******
2026-01-22 06:47:26,354 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.038) 0:02:47.904 ******
2026-01-22 06:47:26,436 p=31191 u=zuul n=ansible | TASK [openshift_obs : Install cluster observability operator. definition={{cifmw_openshift_obs_definition }}, kubeconfig={{ cifmw_openshift_kubeconfig }}, state=present] ***
2026-01-22 06:47:26,436 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.081) 0:02:47.988 ******
2026-01-22 06:47:26,436 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:26 +0000 (0:00:00.081) 0:02:47.986 ******
2026-01-22 06:47:27,367 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:47:27,383 p=31191 u=zuul n=ansible | TASK [openshift_obs : Wait for observability operator deployment kind=Deployment, namespace=openshift-operators, name=observability-operator, wait=True, wait_timeout=300, wait_condition={'type': 'Available', 'status': 'True'}, kubeconfig={{ cifmw_openshift_kubeconfig }}] ***
2026-01-22 06:47:27,383 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:27 +0000 (0:00:00.947) 0:02:48.935 ******
2026-01-22 06:47:27,383 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:47:27 +0000 (0:00:00.947) 0:02:48.934 ******
2026-01-22 06:48:38,523 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:48:38,532 p=31191 u=zuul n=ansible | TASK [openshift_obs : Wait for observability-operator pod kind=Pod, namespace=openshift-operators, label_selectors=['app.kubernetes.io/name = observability-operator'], wait=True, wait_timeout=300, wait_condition={'type': 'Ready', 'status': 'True'}, kubeconfig={{ cifmw_openshift_kubeconfig }}] ***
2026-01-22 06:48:38,532 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:38 +0000 (0:01:11.149) 0:04:00.084 ******
2026-01-22 06:48:38,532 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:38 +0000 (0:01:11.149) 0:04:00.083 ******
2026-01-22 06:48:39,407 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:48:39,425 p=31191 u=zuul n=ansible | TASK [Deploy Metal3 BMHs name=deploy_bmh] **************************************
2026-01-22 06:48:39,425 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.892) 0:04:00.977 ******
2026-01-22 06:48:39,425 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.892) 0:04:00.976 ******
2026-01-22 06:48:39,445 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,453 p=31191 u=zuul n=ansible | TASK [Install certmanager operator role name=cert_manager] *********************
2026-01-22 06:48:39,453 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.028) 0:04:01.005 ******
2026-01-22 06:48:39,454 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.028) 0:04:01.004 ******
2026-01-22 06:48:39,473 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,480 p=31191 u=zuul n=ansible | TASK [Configure hosts networking using nmstate name=ci_nmstate] ****************
2026-01-22 06:48:39,480 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.026) 0:04:01.032 ******
2026-01-22 06:48:39,480 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.026) 0:04:01.031 ******
2026-01-22 06:48:39,500 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,508 p=31191 u=zuul n=ansible | TASK [Configure multus networks name=ci_multus] ********************************
2026-01-22 06:48:39,508 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.027) 0:04:01.060 ******
2026-01-22 06:48:39,508 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.028) 0:04:01.059 ******
2026-01-22 06:48:39,525 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,533 p=31191 u=zuul n=ansible | TASK [Deploy Sushy Emulator service pod name=sushy_emulator] *******************
2026-01-22 06:48:39,533 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.024) 0:04:01.085 ******
2026-01-22 06:48:39,533 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.024) 0:04:01.083 ******
2026-01-22 06:48:39,549 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,556 p=31191 u=zuul n=ansible | TASK [Setup Libvirt on controller name=libvirt_manager] ************************
2026-01-22 06:48:39,557 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.023) 0:04:01.108 ******
2026-01-22 06:48:39,557 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.023) 0:04:01.107 ******
2026-01-22 06:48:39,573 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,580 p=31191 u=zuul n=ansible | TASK [Prepare container package builder name=pkg_build] ************************
2026-01-22 06:48:39,580 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.023) 0:04:01.132 ******
2026-01-22 06:48:39,580 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.023) 0:04:01.130 ******
2026-01-22 06:48:39,598 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,605 p=31191 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-22 06:48:39,606 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.025) 0:04:01.157 ******
2026-01-22 06:48:39,606 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.025) 0:04:01.156 ******
2026-01-22 06:48:39,657 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:48:39,665 p=31191 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-22 06:48:39,666 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.059) 0:04:01.217 ******
2026-01-22 06:48:39,666 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.059) 0:04:01.216 ******
2026-01-22 06:48:39,748 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:48:39,756 p=31191 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_infra _raw_params={{ hook.type }}.yml] ***
2026-01-22 06:48:39,757 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.091) 0:04:01.308 ******
2026-01-22 06:48:39,757 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.091) 0:04:01.307 ******
2026-01-22 06:48:39,841 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:39,880 p=31191 u=zuul n=ansible | TASK [Load parameters dir={{ item }}] ******************************************
2026-01-22 06:48:39,880 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.123) 0:04:01.432 ******
2026-01-22 06:48:39,881 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.123) 0:04:01.431 ******
2026-01-22 06:48:39,985 p=31191 u=zuul n=ansible | ok: [controller] => (item=/home/zuul/ci-framework-data/artifacts/parameters)
2026-01-22 06:48:39,988 p=31191 u=zuul n=ansible | ok: [controller] => (item=/etc/ci/env)
2026-01-22 06:48:40,000 p=31191 u=zuul n=ansible | TASK [Ensure that the isolated net was configured for crc that=['crc_ci_bootstrap_networks_out is defined', 'crc_ci_bootstrap_networks_out[_crc_hostname] is defined', "crc_ci_bootstrap_networks_out[_crc_hostname]['default'] is defined"]] ***
2026-01-22 06:48:40,000 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.120) 0:04:01.552 ******
2026-01-22 06:48:40,001 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:39 +0000 (0:00:00.119) 0:04:01.551 ******
2026-01-22 06:48:40,027 p=31191 u=zuul n=ansible | ok: [controller] => changed: false msg: All assertions passed
2026-01-22 06:48:40,035 p=31191 u=zuul n=ansible | TASK [Set facts for further usage within the framework cifmw_edpm_prepare_extra_vars={'NNCP_INTERFACE': '{{ crc_ci_bootstrap_networks_out.crc.default.iface }}', 'NETWORK_MTU': '{{ crc_ci_bootstrap_networks_out.crc.default.mtu }}', 'NNCP_DNS_SERVER': "{{\n cifmw_nncp_dns_server |\n default(crc_ci_bootstrap_networks_out[_crc_hostname].default.ip) |\n split('/') | first\n}}"}] ***
2026-01-22 06:48:40,035 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.034) 0:04:01.587 ******
2026-01-22 06:48:40,035 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.034) 0:04:01.586 ******
2026-01-22 06:48:40,055 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:48:40,082 p=31191 u=zuul n=ansible | PLAY [Deploy Openstack Operators] **********************************************
2026-01-22 06:48:40,097 p=31191 u=zuul n=ansible | TASK [Use the locally built operators if any _local_operators_indexes={{ _local_operators_indexes|default({}) | combine({ item.key.split('-')[0]|upper+'_IMG': cifmw_operator_build_output['operators'][item.key].image_catalog}) }}] ***
2026-01-22 06:48:40,098 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.062) 0:04:01.649 ******
2026-01-22 06:48:40,098 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.062) 0:04:01.648 ******
2026-01-22 06:48:40,127 p=31191 u=zuul n=ansible | ok: [controller] => (item={'key': 'openstack-operator', 'value': {'git_commit_hash': '06ba595c74a08db75e36471c34fbb9e36b766fb7', 'git_src_dir': '/home/zuul-worker/src/github.com/openstack-k8s-operators/openstack-operator', 'image': '38.102.83.50:5001/openstack-k8s-operators/openstack-operator:06ba595c74a08db75e36471c34fbb9e36b766fb7', 'image_bundle': '38.102.83.50:5001/openstack-k8s-operators/openstack-operator-bundle:06ba595c74a08db75e36471c34fbb9e36b766fb7', 'image_catalog': '38.102.83.50:5001/openstack-k8s-operators/openstack-operator-index:06ba595c74a08db75e36471c34fbb9e36b766fb7'}})
2026-01-22 06:48:40,137 p=31191 u=zuul n=ansible | ok: [controller] => (item={'key': 'watcher-operator', 'value': {'git_commit_hash': '2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'git_src_dir': '/home/zuul-worker/src/github.com/openstack-k8s-operators/watcher-operator', 'image': '38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'image_bundle': '38.102.83.50:5001/openstack-k8s-operators/watcher-operator-bundle:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'image_catalog': '38.102.83.50:5001/openstack-k8s-operators/watcher-operator-index:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9'}})
2026-01-22 06:48:40,146 p=31191 u=zuul n=ansible | TASK [Set install_yamls Makefile environment variables cifmw_edpm_prepare_common_env={{ cifmw_install_yamls_environment | combine({'PATH': cifmw_path}) | combine(cifmw_edpm_prepare_extra_vars | default({})) }}, cifmw_edpm_prepare_operators_build_output={{ operators_build_output }}, cifmw_edpm_prepare_make_openstack_env={{ _local_operators_indexes | combine(_openstack_operator_images) }}] ***
2026-01-22 06:48:40,146 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.048) 0:04:01.698 ******
2026-01-22 06:48:40,146 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.048) 0:04:01.696 ******
2026-01-22 06:48:40,170 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:48:40,177 p=31191 u=zuul n=ansible | TASK [detect if openstack operator is installed _raw_params=oc get sub --ignore-not-found=true -n openstack-operators -o name openstack-operator] ***
2026-01-22 06:48:40,177 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.031) 0:04:01.729 ******
2026-01-22 06:48:40,177 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.031) 0:04:01.728 ******
2026-01-22 06:48:40,529 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:48:40,540 p=31191 u=zuul n=ansible | TASK [Install openstack operator and wait for the csv to succeed name=install_yamls_makes, tasks_from=make_openstack_init] ***
2026-01-22 06:48:40,540 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.362) 0:04:02.092 ******
2026-01-22 06:48:40,540 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.362) 0:04:02.090 ******
2026-01-22 06:48:40,575 p=31191 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_init_env var=make_openstack_init_env] ***
2026-01-22 06:48:40,575 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.035) 0:04:02.127 ******
2026-01-22 06:48:40,575 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.035) 0:04:02.126 ******
2026-01-22 06:48:40,610 p=31191 u=zuul n=ansible | ok: [controller] =>
  make_openstack_init_env:
    BMO_SETUP: false
    CHECKOUT_FROM_OPENSTACK_REF: 'true'
    KUBECONFIG: /home/zuul/.crc/machines/crc/kubeconfig
    NETWORK_MTU: 1500
    NNCP_DNS_SERVER: 192.168.122.10
    NNCP_INTERFACE: ens7
    OPENSTACK_BUNDLE_IMG: 38.102.83.50:5001/openstack-k8s-operators/openstack-operator-bundle:06ba595c74a08db75e36471c34fbb9e36b766fb7
    OPENSTACK_IMG: 38.102.83.50:5001/openstack-k8s-operators/openstack-operator-index:06ba595c74a08db75e36471c34fbb9e36b766fb7
    OPENSTACK_K8S_BRANCH: main
    OUT: /home/zuul/ci-framework-data/artifacts/manifests
    OUTPUT_DIR: /home/zuul/ci-framework-data/artifacts/edpm
    PATH: /home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
    WATCHER_BRANCH: ''
    WATCHER_IMG: 38.102.83.50:5001/openstack-k8s-operators/watcher-operator-index:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9
    WATCHER_REPO: /home/zuul/src/github.com/openstack-k8s-operators/watcher-operator
2026-01-22 06:48:40,620 p=31191 u=zuul n=ansible | TASK [install_yamls_makes : Debug make_openstack_init_params var=make_openstack_init_params] ***
2026-01-22 06:48:40,620 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.044) 0:04:02.172 ******
2026-01-22 06:48:40,620 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.044) 0:04:02.170 ******
2026-01-22 06:48:40,644 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:48:40,654 p=31191 u=zuul n=ansible | TASK [install_yamls_makes : Run openstack_init output_dir={{ cifmw_basedir }}/artifacts, chdir=/home/zuul/src/github.com/openstack-k8s-operators/install_yamls, script=make openstack_init, dry_run={{ make_openstack_init_dryrun|default(false)|bool }}, extra_args={{ dict((make_openstack_init_env|default({})), **(make_openstack_init_params|default({}))) }}] ***
2026-01-22 06:48:40,654 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.034) 0:04:02.206 ******
2026-01-22 06:48:40,655 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:48:40 +0000 (0:00:00.034) 0:04:02.205 ******
2026-01-22 06:48:40,794 p=31191 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_003_run_openstack.log
2026-01-22 06:54:20,169 p=31191 u=zuul n=ansible | [WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: {{ make_openstack_init_until | default(true) }}
2026-01-22 06:54:20,173 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:54:20,188 p=31191 u=zuul n=ansible | TASK [Run hooks after installing openstack name=run_hook] **********************
2026-01-22 06:54:20,188 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:05:39.533) 0:09:41.740 ******
2026-01-22 06:54:20,188 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:05:39.533) 0:09:41.738 ******
2026-01-22 06:54:20,225 p=31191 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-22 06:54:20,226 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.037) 0:09:41.777 ******
2026-01-22 06:54:20,226 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.037) 0:09:41.776 ******
2026-01-22 06:54:20,316 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:20,326 p=31191 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-22 06:54:20,326 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.100) 0:09:41.878 ******
2026-01-22 06:54:20,326 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.100) 0:09:41.876 ******
2026-01-22 06:54:20,424 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:20,443 p=31191 u=zuul n=ansible | TASK [run_hook : Loop on hooks for post_install_operators_kuttl_from_operator _raw_params={{ hook.type }}.yml] ***
2026-01-22 06:54:20,443 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.117) 0:09:41.995 ******
2026-01-22 06:54:20,444 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.117) 0:09:41.994 ******
2026-01-22 06:54:20,587 p=31191 u=zuul n=ansible | included: /home/zuul/src/github.com/openstack-k8s-operators/ci-framework/roles/run_hook/tasks/playbook.yml for controller => (item={'name': 'Deploy watcher operator', 'type': 'playbook', 'source': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator/ci/playbooks/deploy_watcher_operator.yaml', 'extra_vars': {'content_provider_os_registry_url': '38.102.83.50:5001/podified-master-centos10', 'watcher_catalog_image': '38.102.83.50:5001/openstack-k8s-operators/watcher-operator-index:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9', 'watcher_services_tag': 'watcher_latest', 'watcher_repo': '/home/zuul/src/github.com/openstack-k8s-operators/watcher-operator'}})
2026-01-22 06:54:20,602 p=31191 u=zuul n=ansible | TASK [run_hook : Set playbook path for Deploy watcher operator cifmw_basedir={{ _bdir }}, hook_name={{ _hook_name }}, playbook_path={{ _play | realpath }}, log_path={{ _bdir }}/logs/{{ step }}_{{ _hook_name }}.log, extra_vars=-e namespace={{ cifmw_openstack_namespace }} {%- if hook.extra_vars is defined and hook.extra_vars|length > 0 -%} {% for key,value in hook.extra_vars.items() -%} {%- if key == 'file' %} -e "@{{ value }}" {%- else %} -e "{{ key }}={{ value }}" {%- endif %} {%- endfor %} {%- endif %}] ***
2026-01-22 06:54:20,603 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.159) 0:09:42.155 ******
2026-01-22 06:54:20,603 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.159) 0:09:42.153 ******
2026-01-22 06:54:20,653 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:20,665 p=31191 u=zuul n=ansible | TASK [run_hook : Get file stat path={{ playbook_path }}] ***********************
2026-01-22 06:54:20,666 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.062) 0:09:42.217 ******
2026-01-22 06:54:20,666 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.062) 0:09:42.216 ******
2026-01-22 06:54:20,930 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:20,936 p=31191 u=zuul n=ansible | TASK [run_hook : Fail if playbook doesn't exist msg=Playbook {{ playbook_path }} doesn't seem to exist.] ***
2026-01-22 06:54:20,936 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.270) 0:09:42.488 ******
2026-01-22 06:54:20,936 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.270) 0:09:42.487 ******
2026-01-22 06:54:20,952 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:54:20,960 p=31191 u=zuul n=ansible | TASK [run_hook : Get parameters files paths={{ (cifmw_basedir, 'artifacts/parameters') | path_join }}, file_type=file, patterns=*.yml] ***
2026-01-22 06:54:20,960 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.023) 0:09:42.512 ******
2026-01-22 06:54:20,960 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:20 +0000 (0:00:00.023) 0:09:42.511 ******
2026-01-22 06:54:21,178 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:21,186 p=31191 u=zuul n=ansible | TASK [run_hook : Add parameters artifacts as extra variables extra_vars={{ extra_vars }} {% for file in cifmw_run_hook_parameters_files.files %} -e "@{{ file.path }}" {%- endfor %}] ***
2026-01-22 06:54:21,186 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.225) 0:09:42.737 ******
2026-01-22 06:54:21,186 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.225) 0:09:42.736 ******
2026-01-22 06:54:21,224 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:21,236 p=31191 u=zuul n=ansible | TASK [run_hook : Ensure log directory exists path={{ log_path | dirname }}, state=directory, mode=0755] ***
2026-01-22 06:54:21,237 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.050) 0:09:42.788 ******
2026-01-22 06:54:21,237 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.051) 0:09:42.787 ******
2026-01-22 06:54:21,470 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:21,484 p=31191 u=zuul n=ansible | TASK [run_hook : Ensure artifacts directory exists path={{ cifmw_basedir }}/artifacts, state=directory, mode=0755] ***
2026-01-22 06:54:21,484 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.247) 0:09:43.036 ******
2026-01-22 06:54:21,485 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.247) 0:09:43.035 ******
2026-01-22 06:54:21,709 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:54:21,721 p=31191 u=zuul n=ansible | TASK [run_hook : Run hook without retry - Deploy watcher operator] *************
2026-01-22 06:54:21,722 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.237) 0:09:43.274 ******
2026-01-22 06:54:21,722 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:54:21 +0000 (0:00:00.237) 0:09:43.272 ******
2026-01-22 06:54:21,886 p=31191 u=zuul n=ansible | Follow script's output here: /home/zuul/ci-framework-data/logs/ci_script_004_run_hook_without_retry_deploy.log
2026-01-22 06:55:18,446 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:55:18,453 p=31191 u=zuul n=ansible | TASK [run_hook : Run hook with retry - Deploy watcher operator] ****************
2026-01-22 06:55:18,454 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:56.731) 0:10:40.005 ******
2026-01-22 06:55:18,454 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:56.731) 0:10:40.004 ******
2026-01-22 06:55:18,470 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:55:18,478 p=31191 u=zuul n=ansible | TASK [run_hook : Check if we have a file path={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] ***
2026-01-22 06:55:18,478 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:00.024) 0:10:40.030 ******
2026-01-22 06:55:18,479 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:00.024) 0:10:40.029 ******
2026-01-22 06:55:18,681 p=31191 u=zuul n=ansible | ok: [controller]
2026-01-22 06:55:18,688 p=31191 u=zuul n=ansible | TASK [run_hook : Load generated content in main playbook file={{ cifmw_basedir }}/artifacts/{{ step }}_{{ hook_name }}.yml] ***
2026-01-22 06:55:18,688 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:00.209) 0:10:40.240 ******
2026-01-22 06:55:18,688 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:00.209) 0:10:40.239 ******
2026-01-22 06:55:18,706 p=31191 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:55:18,719 p=31191 u=zuul n=ansible | TASK [install kuttl test_suite dependencies chdir={{ ansible_user_dir }}/{{ operator_basedir }}, _raw_params=make kuttl-test-prep] ***
2026-01-22 06:55:18,719 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:00.030) 0:10:40.271 ******
2026-01-22 06:55:18,719 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:55:18 +0000 (0:00:00.030) 0:10:40.269 ******
2026-01-22 06:58:13,848 p=31191 u=zuul n=ansible | changed: [controller]
2026-01-22 06:58:13,881 p=31191 u=zuul n=ansible | PLAY RECAP *********************************************************************
2026-01-22 06:58:13,881 p=31191 u=zuul n=ansible | controller : ok=114 changed=41 unreachable=0 failed=0 skipped=79 rescued=0 ignored=1
2026-01-22 06:58:13,881 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:58:13 +0000 (0:02:55.162) 0:13:35.433 ******
2026-01-22 06:58:13,881 p=31191 u=zuul n=ansible | ===============================================================================
2026-01-22 06:58:13,881 p=31191 u=zuul n=ansible | install_yamls_makes : Run openstack_init ------------------------------ 339.53s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | install kuttl test_suite dependencies --------------------------------- 175.16s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | openshift_obs : Wait for observability operator deployment ------------- 71.15s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | install_yamls_makes : Run download_tools ------------------------------- 57.97s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | run_hook : Run hook without retry - Deploy watcher operator ------------ 56.73s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | run_hook : Run hook without retry - Download needed tools -------------- 30.31s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | ci_setup : Install needed packages ------------------------------------- 28.92s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | repo_setup : Initialize python venv and install requirements ------------ 8.69s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | ci_setup : Install openshift client ------------------------------------- 5.77s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | openshift_setup : Create required namespaces ---------------------------- 2.04s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | install_ca : Update ca bundle ------------------------------------------- 1.49s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | ci_setup : Manage directories ------------------------------------------- 1.35s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | Gathering Facts --------------------------------------------------------- 1.32s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | openshift_setup : Gather network.operator info -------------------------- 1.09s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | repo_setup : Get repo-setup repository ---------------------------------- 1.06s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | repo_setup : Dump full hash in delorean.repo.md5 file ------------------- 1.04s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | openshift_setup : Patch network operator -------------------------------- 1.02s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | repo_setup : Make sure git-core package is installed -------------------- 1.02s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | openshift_obs : Install cluster observability operator. ----------------- 0.95s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | repo_setup : Install repo-setup package --------------------------------- 0.92s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | Thursday 22 January 2026 06:58:13 +0000 (0:02:55.163) 0:13:35.433 ******
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | ===============================================================================
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | install_yamls_makes --------------------------------------------------- 397.65s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | ansible.builtin.command ----------------------------------------------- 175.53s
2026-01-22 06:58:13,882 p=31191 u=zuul n=ansible | run_hook --------------------------------------------------------------- 90.85s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | openshift_obs ---------------------------------------------------------- 72.99s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | ci_setup --------------------------------------------------------------- 37.57s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | repo_setup ------------------------------------------------------------- 18.71s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | openshift_setup --------------------------------------------------------- 5.74s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | openshift_login --------------------------------------------------------- 5.23s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | install_yamls ----------------------------------------------------------- 4.34s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | install_ca -------------------------------------------------------------- 1.93s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | cifmw_setup ------------------------------------------------------------- 1.71s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | gather_facts ------------------------------------------------------------ 1.32s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | networking_mapper ------------------------------------------------------- 0.85s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | discover_latest_image --------------------------------------------------- 0.55s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | ansible.builtin.set_fact ------------------------------------------------ 0.14s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | ansible.builtin.include_vars -------------------------------------------- 0.12s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | ansible.builtin.include_role -------------------------------------------- 0.11s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | ansible.builtin.assert -------------------------------------------------- 0.07s
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2026-01-22 06:58:13,883 p=31191 u=zuul n=ansible | total ----------------------------------------------------------------- 815.38s
2026-01-22 06:58:16,043 p=40075 u=zuul n=ansible | PLAY [controller] **************************************************************
2026-01-22 06:58:16,072 p=40075 u=zuul n=ansible | TASK [Run hooks before running kuttl tests name=run_hook] **********************
2026-01-22 06:58:16,073 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.045) 0:00:00.045 ******
2026-01-22 06:58:16,073 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.043) 0:00:00.043 ******
2026-01-22 06:58:16,111 p=40075 u=zuul n=ansible | TASK [run_hook : Assert parameters are valid quiet=True, that=['_list_hooks is not string', '_list_hooks is not mapping', '_list_hooks is iterable', '(hooks | default([])) is not string', '(hooks | default([])) is not mapping', '(hooks | default([])) is iterable']] ***
2026-01-22 06:58:16,111 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.038) 0:00:00.083 ******
2026-01-22 06:58:16,112 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.038) 0:00:00.082 ******
2026-01-22 06:58:16,169 p=40075 u=zuul n=ansible | ok: [controller]
2026-01-22 06:58:16,178 p=40075 u=zuul n=ansible | TASK [run_hook : Assert single hooks are all mappings quiet=True, that=['_not_mapping_hooks | length == 0'], msg=All single hooks must be a list of mappings or a mapping.] ***
2026-01-22 06:58:16,178 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.066) 0:00:00.150 ******
2026-01-22 06:58:16,178 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.066) 0:00:00.148 ******
2026-01-22 06:58:16,263 p=40075 u=zuul n=ansible | ok: [controller]
2026-01-22 06:58:16,275 p=40075 u=zuul n=ansible | TASK [run_hook : Loop on hooks for pre_kuttl_from_operator _raw_params={{ hook.type }}.yml] ***
2026-01-22 06:58:16,275 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.096) 0:00:00.247 ******
2026-01-22 06:58:16,275 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.096) 0:00:00.245 ******
2026-01-22 06:58:16,398 p=40075 u=zuul n=ansible | skipping: [controller]
2026-01-22 06:58:16,420 p=40075 u=zuul n=ansible | TASK [run kuttl test suite from operator Makefile chdir={{ ansible_user_dir }}/{{ operator_basedir }}, _raw_params=make kuttl-test-run] ***
2026-01-22 06:58:16,420 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.145) 0:00:00.392 ******
2026-01-22 06:58:16,420 p=40075 u=zuul n=ansible | Thursday 22 January 2026 06:58:16 +0000 (0:00:00.145) 0:00:00.391 ******
2026-01-22 07:13:17,233 p=40075 u=zuul n=ansible | fatal: [controller]: FAILED! => changed: true
  cmd:
  - make
  - kuttl-test-run
  delta: '0:15:00.088797'
  end: '2026-01-22 07:13:16.892965'
  msg: non-zero return code
  rc: 2
  start: '2026-01-22 06:58:16.804168'
  stderr: |-
    Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
    Warning: The 'cinderBackup' field is deprecated and will be removed in a future release. Please migrate to 'cinderBackups'.
    make: *** [Makefile:444: kuttl-test-run] Error 1
  stderr_lines:
  - 'Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.'
- 'Warning: The ''cinderBackup'' field is deprecated and will be removed in a future release. Please migrate to ''cinderBackups''.' - 'make: *** [Makefile:444: kuttl-test-run] Error 1' stdout: "oc kuttl test --v 1 --start-kind=false --config test/kuttl/test-suites/default/config.yaml\n=== RUN kuttl\n harness.go:463: starting setup\n harness.go:255: running tests using configured kubeconfig.\n harness.go:278: Successful connection to cluster at: https://api.crc.testing:6443\n harness.go:363: running tests\n harness.go:75: going to run test suite with timeout of 300 seconds for each step\n harness.go:375: testsuite: test/kuttl/test-suites/default/ has 10 tests\n=== RUN kuttl/harness\n=== RUN kuttl/harness/common\n=== PAUSE kuttl/harness/common\n=== RUN kuttl/harness/deps\n=== PAUSE kuttl/harness/deps\n=== RUN kuttl/harness/watcher\n=== PAUSE kuttl/harness/watcher\n=== RUN kuttl/harness/watcher-api-scaling\n=== PAUSE kuttl/harness/watcher-api-scaling\n=== RUN kuttl/harness/watcher-cinder\n=== PAUSE kuttl/harness/watcher-cinder\n=== RUN kuttl/harness/watcher-notification\n=== PAUSE kuttl/harness/watcher-notification\n=== RUN kuttl/harness/watcher-rmquser\n=== PAUSE kuttl/harness/watcher-rmquser\n=== RUN kuttl/harness/watcher-tls\n=== PAUSE kuttl/harness/watcher-tls\n=== RUN kuttl/harness/watcher-tls-certs-change\n=== PAUSE kuttl/harness/watcher-tls-certs-change\n=== RUN kuttl/harness/watcher-topology\n=== PAUSE kuttl/harness/watcher-topology\n=== CONT kuttl/harness/common\n logger.go:42: 06:58:17 | common | Ignoring cleanup-assert.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n logger.go:42: 06:58:17 | common | Ignoring cleanup-errors.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n \ logger.go:42: 06:58:17 | common | Ignoring cleanup-watcher.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n logger.go:42: 06:58:17 | common | Ignoring deploy-with-defaults.yaml as it does not match 
file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n logger.go:42: 06:58:17 | common | Skipping creation of user-supplied namespace: watcher-kuttl-default\n logger.go:42: 06:58:17 | common | skipping kubernetes event logging\n=== CONT kuttl/harness/watcher-notification\n \ logger.go:42: 06:58:17 | watcher-notification | Skipping creation of user-supplied namespace: watcher-kuttl-default\n logger.go:42: 06:58:17 | watcher-notification/0-cleanup-watcher | starting test step 0-cleanup-watcher\n logger.go:42: 06:58:17 | watcher-notification/0-cleanup-watcher | test step completed 0-cleanup-watcher\n logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | starting test step 1-deploy-with-notification\n logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | Watcher:watcher-kuttl-default/watcher-kuttl created\n logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | Now using project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api 
-ocustom-columns=:metadata.name\n \ logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | + APIPOD=\n
logger.go:42: 06:58:51 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:58:51 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:58:51 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:58:51 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:58:51 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:58:52 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:58:53 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:58:53 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:58:53 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:58:53 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:58:53 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:58:53 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:58:53 | watcher-notification/1-deploy-with-notification | + APIPOD=\n 
logger.go:42: 06:58:54 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:58:54 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:58:54 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:58:54 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:58:54 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:58:54 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:58:54 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:58:55 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:58:55 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:58:55 | watcher-notification/1-deploy-with-notification 
| Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:58:55 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:58:55 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:58:55 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:58:56 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:58:57 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:58:57 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:58:57 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:58:57 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:58:57 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:58:57 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:58:57 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:58:58 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n 
watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:58:58 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:58:58 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:58:58 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:58:58 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:58:58 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:58:58 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:58:59 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:58:59 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:00 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:00 | watcher-notification/1-deploy-with-notification | ++ oc get 
pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:00 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:00 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:00 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:59:01 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:01 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:01 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:01 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:01 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:01 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:01 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:59:02 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:02 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:03 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:03 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:03 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:03 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:03 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:59:04 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:04 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:04 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:04 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:04 | watcher-notification/1-deploy-with-notification | ++ head -1\n 
logger.go:42: 06:59:04 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:04 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:59:05 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:05 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:05 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:05 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:05 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:05 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:05 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:59:06 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n 
logger.go:42: 06:59:06 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:07 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:07 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:07 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:07 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:07 | watcher-notification/1-deploy-with-notification | + APIPOD=\n logger.go:42: 06:59:08 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:08 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:08 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:08 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:08 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:08 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:08 | watcher-notification/1-deploy-with-notification | + APIPOD=\n 
logger.go:42: 06:59:09 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail
  oc project watcher-kuttl-default
  APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)
  if [ -n "${APIPOD}" ]; then
    [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+=') == 1 ]
  else
    exit 1
  fi
]
logger.go:42: 06:59:09 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ head -1
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ grep -czPo '\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = messagingv2 transport_url = 'rabbit://**********=1' '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | + '[' 1 == 1 ']'
logger.go:42: 06:59:16 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail
  oc project watcher-kuttl-default
  APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)
  if [ -n "${APIPOD}" ]; then
    [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+=') == 1 ]
  else
    exit 1
  fi
]
logger.go:42: 06:59:16 | 
watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | ++ grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+='\n \ logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = messagingv2 transport_url = 'rabbit://**********=1' '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' 
memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 06:59:17 | watcher-notification/1-deploy-with-notification | + '[' 1 == 1 ']'\n logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" 
-ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | ++ grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+='\n \ logger.go:42: 06:59:18 | watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 06:59:19 | watcher-notification/1-deploy-with-notification | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 
'mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = messagingv2 transport_url = 'rabbit://**********=1' '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 
'[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 06:59:19 | watcher-notification/1-deploy-with-notification | + '[' 1 == 1 ']'\n logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | ++ grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+='\n \ logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 06:59:20 
| watcher-notification/1-deploy-with-notification | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = messagingv2 transport_url = 'rabbit://**********=1' '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = 
internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 06:59:20 | watcher-notification/1-deploy-with-notification | + '[' 1 == 1 ']'\n logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+=') == 1 ]\n else\n exit 1\n fi\n ]\n logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default\n \ logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | ++ head -1\n logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | ++ grep -v '^$'\n logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 06:59:22 | 
watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | ++ grep -czPo '\\[oslo_messaging_notifications\\]\\s+driver\\s+=\\s+messagingv2\\s+transport_url\\s+='\n \ logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = messagingv2 transport_url = 'rabbit://**********=1' '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool 
memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | + '[' 1 == 1 ']'\n logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | test step completed 1-deploy-with-notification\n logger.go:42: 06:59:22 | watcher-notification/2-cleanup-watcher | starting test step 2-cleanup-watcher\n logger.go:42: 06:59:33 | watcher-notification/2-cleanup-watcher | test step completed 2-cleanup-watcher\n logger.go:42: 06:59:33 | watcher-notification | skipping kubernetes event logging\n=== CONT kuttl/harness/watcher-topology\n \ logger.go:42: 06:59:33 | watcher-topology | Skipping creation of user-supplied namespace: watcher-kuttl-default\n logger.go:42: 06:59:33 | watcher-topology/0-cleanup-watcher | starting test step 0-cleanup-watcher\n logger.go:42: 06:59:33 | watcher-topology/0-cleanup-watcher | test step completed 0-cleanup-watcher\n logger.go:42: 06:59:33 | watcher-topology/1-deploy-with-topology | starting test step 1-deploy-with-topology\n logger.go:42: 06:59:33 | watcher-topology/1-deploy-with-topology | Topology:watcher-kuttl-default/watcher-api created\n logger.go:42: 06:59:33 | 
watcher-topology/1-deploy-with-topology | Watcher:watcher-kuttl-default/watcher-kuttl created\n logger.go:42: 06:59:59 | watcher-topology/1-deploy-with-topology | test step completed 1-deploy-with-topology\n logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | starting test step 2-cleanup-watcher\n logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | running command: [sh -c set -ex\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]\n ]\n logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | ++ grep -c '^watcher'\n logger.go:42: 07:00:03 | watcher-topology/2-cleanup-watcher | + '[' 0 == 0 ']'\n logger.go:42: 07:00:04 | watcher-topology/2-cleanup-watcher | running command: [sh -c set -ex\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]\n ]\n logger.go:42: 07:00:04 | watcher-topology/2-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:00:04 | watcher-topology/2-cleanup-watcher | ++ grep -c '^watcher'\n logger.go:42: 07:00:07 | watcher-topology/2-cleanup-watcher | + '[' 0 == 0 ']'\n logger.go:42: 07:00:07 | watcher-topology/2-cleanup-watcher | test step completed 2-cleanup-watcher\n logger.go:42: 07:00:07 | watcher-topology | skipping kubernetes event logging\n=== CONT kuttl/harness/watcher-tls-certs-change\n \ logger.go:42: 07:00:07 | watcher-tls-certs-change | Skipping creation of user-supplied namespace: watcher-kuttl-default\n logger.go:42: 07:00:07 | watcher-tls-certs-change/0-cleanup-watcher | starting test step 0-cleanup-watcher\n logger.go:42: 07:00:07 | watcher-tls-certs-change/0-cleanup-watcher | test step completed 
0-cleanup-watcher\n logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | starting test step 1-deploy-with-tlse\n logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-internal-svc created\n logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-public-svc created\n logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Watcher:watcher-kuttl-default/watcher-kuttl created\n logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n \ # ensure that the svc secret and cert secret match\n if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n \ internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n fi\n ]\n logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods \"watcher-kuttl-api-0\" not found\n logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=\n logger.go:42: 07:00:08 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api 
watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n \ if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n \ if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n fi\n ]\n logger.go:42: 07:00:08 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:08 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods \"watcher-kuttl-api-0\" not found\n logger.go:42: 07:00:08 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=\n logger.go:42: 07:00:09 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n \ if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n \ if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n 
fi\n ]\n logger.go:42: 07:00:09 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:10 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods \"watcher-kuttl-api-0\" not found\n logger.go:42: 07:00:10 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=\n logger.go:42: 07:00:11 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n \ if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n \ if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n fi\n ]\n logger.go:42: 07:00:11 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:11 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods \"watcher-kuttl-api-0\" not found\n logger.go:42: 07:00:11 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=\n logger.go:42: 07:00:12 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n public_secret_cert=$(oc get -n $NAMESPACE secret 
cert-watcher-public-svc -o jsonpath='{.data.tls\.crt}' | base64 --decode)
# ensure that the svc secret and cert secret match
if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then
    exit 1
fi

internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)
internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\.crt}' | base64 --decode)
# ensure that the svc secret and cert secret match
if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then
    exit 1
fi
]
logger.go:42: 07:00:12 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:12 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found
logger.go:42: 07:00:12 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:13 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:13 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found
logger.go:42: 07:00:13 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:14 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:15 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found
logger.go:42: 07:00:15 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:16 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:16 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found
logger.go:42: 07:00:16 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:17 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:17 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found
logger.go:42: 07:00:17 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:18 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:18 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found
logger.go:42: 07:00:18 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:19 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:20 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found
logger.go:42: 07:00:20 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:21 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:21 | watcher-tls-certs-change/1-deploy-with-tlse | error: unable to upgrade connection: container not found ("watcher-api")
logger.go:42: 07:00:21 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)
public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\.crt}' | base64 --decode)
# ensure that the svc secret and cert secret match
if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then
    exit 1
fi

internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)
internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\.crt}' | base64 --decode)
# ensure that the svc secret and cert secret match
if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then
    exit 1
fi
]
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert='-----BEGIN 
CERTIFICATE-----\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:25 
| watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n \ public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n \ internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n fi\n ]\n logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | 
+PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | 
X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode\n logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n \ logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:27 | 
watcher-tls-certs-change/1-deploy-with-tlse | [remaining base64 lines of the public certificate elided]
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE----- [pod public cert, base64 body elided] -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE----- [secret public cert, identical base64 body elided] -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert='-----BEGIN CERTIFICATE----- [base64 body elided] -----END CERTIFICATE-----'
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert='-----BEGIN CERTIFICATE----- [base64 body elided] -----END CERTIFICATE-----'
logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE----- [pod internal cert, base64 body elided] -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE----- [secret internal cert, identical base64 body elided] -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)
public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\.crt}' | base64 --decode)
# ensure that the svc secret and cert secret match
if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then
  exit 1
fi

internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)
internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\.crt}' | base64 --decode)
# ensure that the svc secret and cert secret match
if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then
  exit 1
fi
]
logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert='-----BEGIN CERTIFICATE----- [base64 body elided] -----END CERTIFICATE-----'
logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert='-----BEGIN CERTIFICATE----- [base64 body elided] -----END CERTIFICATE-----'
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE----- [pod public cert, base64 body elided] -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE----- [secret public cert, identical base64 body elided] -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert='-----BEGIN CERTIFICATE----- [base64 body elided] -----END CERTIFICATE-----'
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert='-----BEGIN CERTIFICATE----- [base64 body elided] -----END CERTIFICATE-----'
logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE----- [pod internal cert, base64 body continues in following log lines]
watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 
kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 
uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n \ public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n \ internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n fi\n ]\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | 
+PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | 
X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:30 | 
watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n 
logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:30 | 
watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | 
KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert='-----BEGIN 
CERTIFICATE-----\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:31 
| watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 
kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 
uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:31 | 
watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 
kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 
uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n \ public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n \ internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n fi\n ]\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | 
+PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | 
X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:32 | 
watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n 
logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:32 | 
watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | 
KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=\n \ logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert='-----BEGIN 
CERTIFICATE-----\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:33 
| watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 
kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 
uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:33 | 
watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 
kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 
uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail\n public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)\n \ public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${public_svc_cert}\" != \"${public_secret_cert}\" ]; then\n exit 1\n fi\n \n internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n \ internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n if [ \"${internal_svc_cert}\" != \"${internal_secret_cert}\" ]; then\n exit 1\n fi\n ]\n logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | 
+PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw\n \ logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | 
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | [certificate base64 body elided]
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | [certificate base64 body elided]
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | [same public certificate, body elided]
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | [same public certificate, body elided]
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | [certificate base64 body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | [certificate base64 body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | + '[' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | [same internal certificate, body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | [same internal certificate, body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | test step completed 1-deploy-with-tlse
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | starting test step 2-change-public-svc-certificate
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c oc patch Certificate -n $NAMESPACE watcher-public-svc --type='json' -p='[{"op": "replace", "path": "/spec/dnsNames", "value":['watcher-public.watcher-kuttl-default.svc', 'watcher-public.watcher-kuttl-default.svc.cluster.local']}]'
]
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | certificate.cert-manager.io/watcher-public-svc patched
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail
svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)
secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath='{.data.tls\.crt}' | base64 --decode)
# ensure that the svc secret and cert secret match
if [ "${svc_cert}" != "${secret_cert}" ]; then
  exit 1
fi
]
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | [pod certificate, still the pre-patch public certificate; body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\.crt}'
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ base64 --decode
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + secret_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | [newly issued public certificate with the added SAN; body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + '[' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | [pod certificate (old), body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | [secret certificate (new), body elided]
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + exit 1
logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail ... same verification script as at 07:00:35 ... ]
logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | error: unable to upgrade connection: container not found ("watcher-api")
logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert=
logger.go:42: 07:00:37 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail ... same verification script as at 07:00:35 ... ]
logger.go:42: 07:00:37 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:38 | watcher-tls-certs-change/2-change-public-svc-certificate | error: unable to upgrade connection: container not found ("watcher-api")
logger.go:42: 07:00:38 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert=
logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail ... same verification script as at 07:00:35 ... ]
logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt
logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert='-----BEGIN CERTIFICATE-----
logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | [newly issued public certificate; body elided]
logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA==\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ base64 --decode\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:39 | 
watcher-tls-certs-change/2-change-public-svc-certificate | + secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs\n \ logger.go:42: 07:00:39 | 
watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA==\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 
MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 
d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA==\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 
adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 
u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif\n \ logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA==\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----' ']'\n logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | test step completed 2-change-public-svc-certificate\n logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | starting test step 3-change-internal-svc-certificate\n logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | running command: [sh -c oc patch Certificate -n $NAMESPACE watcher-internal-svc --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/dnsNames\", \"value\":['watcher-internal.watcher-kuttl-default.svc', 'watcher-internal.watcher-kuttl-default.svc.cluster.local']}]'\n ]\n logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | certificate.cert-manager.io/watcher-internal-svc patched\n logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | running command: [sh -c set -euxo pipefail\n svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)\n secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath='{.data.tls\\.crt}' | base64 --decode)\n # ensure that the svc secret and cert secret match\n \ if [ \"${svc_cert}\" != \"${secret_cert}\" ]; then\n exit 1\n fi\n \ ]\n logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | + svc_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:40 | 
watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n logger.go:42: 07:00:40 
| watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----'\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o 'jsonpath={.data.tls\\.crt}'\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ++ base64 --decode\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | + secret_cert='-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 
MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate 
| b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----'\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | + '[' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n 
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==\n \ 
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----' '!=' '-----BEGIN CERTIFICATE-----\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo\n logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh\n \ logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI\n 
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----' ']'
logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | test step completed 3-change-internal-svc-certificate
logger.go:42: 07:00:40 | watcher-tls-certs-change/4-cleanup-watcher | starting test step 4-cleanup-watcher
logger.go:42: 07:00:50 | watcher-tls-certs-change/4-cleanup-watcher | test step completed 4-cleanup-watcher
logger.go:42: 07:00:50 | watcher-tls-certs-change/5-clenaup-certs | starting test step 5-clenaup-certs
logger.go:42: 07:00:50 | watcher-tls-certs-change/5-clenaup-certs | test step completed 5-clenaup-certs
logger.go:42: 07:00:50 | watcher-tls-certs-change | skipping kubernetes event logging
=== CONT kuttl/harness/watcher-tls
logger.go:42: 07:00:50 | watcher-tls | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:00:50 | watcher-tls/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:00:50 | watcher-tls/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | starting test step 1-deploy-with-tlse
logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-internal-svc created
logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-public-svc created
logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# check that both endpoints have https set
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | ++ grep -c '^watcher'
logger.go:42: 07:00:53 | watcher-tls/1-deploy-with-tlse | + '[' 0 == 1 ']'
logger.go:42: 07:00:55 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# check that both endpoints have https set
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:00:55 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:00:55 | watcher-tls/1-deploy-with-tlse | ++ grep -c '^watcher'
logger.go:42: 07:00:57 | watcher-tls/1-deploy-with-tlse | + '[' 0 == 1 ']'
logger.go:42: 07:00:58 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# check that both endpoints have https set
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:00:58 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:00:58 | watcher-tls/1-deploy-with-tlse | ++ grep -c '^watcher'
logger.go:42: 07:01:00 | watcher-tls/1-deploy-with-tlse | + '[' 0 == 1 ']'
logger.go:42: 07:01:01 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# check that both endpoints have https set
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:01:01 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:01:01 | watcher-tls/1-deploy-with-tlse | ++ grep -c '^watcher'
logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | + '[' 1 == 1 ']'
logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | ++ awk '{print $1}'
logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | ++ grep watcher
logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | + '[' -n '' ']'
logger.go:42: 07:01:09 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# check that both endpoints have https set
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:01:09 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:01:09 | watcher-tls/1-deploy-with-tlse | ++ grep -c '^watcher'
logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | + '[' 1 == 1 ']'
logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | ++ awk '{print $1}'
logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | ++ grep watcher
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.my\.cnf}'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ grep -c ssl=1
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ base64 -d
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + '[' 1 == 1 ']'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.00-default\.conf}'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ base64 -d
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + '[' 2 == 2 ']'
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + grep infra-optim
logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ grep -c https
logger.go:42: 07:01:17 | watcher-tls/1-deploy-with-tlse | + '[' 0 == 2 ']'
logger.go:42: 07:01:18 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# check that both endpoints have https set
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:01:18 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:01:18 | watcher-tls/1-deploy-with-tlse | ++ grep -c '^watcher'
logger.go:42: 07:01:20 | watcher-tls/1-deploy-with-tlse | + '[' 1 == 1 ']'
logger.go:42: 07:01:20 | watcher-tls/1-deploy-with-tlse | ++ grep watcher
logger.go:42: 07:01:20 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:01:20 | watcher-tls/1-deploy-with-tlse | ++ awk '{print $1}'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.my\.cnf}'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ grep -c ssl=1
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ base64 -d
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + '[' 1 == 1 ']'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.00-default\.conf}'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ base64 -d
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + '[' 2 == 2 ']'
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + grep infra-optim
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list
logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ grep -c https
logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | + '[' 2 == 2 ']'
logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | + '[' '' == '' ']'
logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | + exit 0
logger.go:42: 07:01:28 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# check that both endpoints have https set
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:01:28 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:01:28 | watcher-tls/1-deploy-with-tlse | ++ grep -c '^watcher'
logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | + '[' 1 == 1 ']'
logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | ++ awk '{print $1}'
logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | ++ grep watcher
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ base64 -d
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ grep -c ssl=1
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.my\.cnf}'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + '[' 1 == 1 ']'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.00-default\.conf}'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ base64 -d
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + '[' 2 == 2 ']'
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + grep infra-optim
logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ grep -c https
logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | + '[' 2 == 2 ']'
logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | + '[' '' == '' ']'
logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | + exit 0
logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | test step completed 1-deploy-with-tlse
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | starting test step 2-patch-mtls
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail
oc patch oscp -n $NAMESPACE openstack --type='json' -p='[{"op": "replace", "path": "/spec/memcached/templates/memcached/tls/mtls/sslVerifyMode", "value": "Request"}]'
]
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | + oc patch oscp -n watcher-kuttl-default openstack --type=json '-p=[{"op": "replace", "path": "/spec/memcached/templates/memcached/tls/mtls/sslVerifyMode", "value": "Request"}]'
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | openstackcontrolplane.core.openstack.org/openstack patched
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail

oc project ${NAMESPACE}
# Get pod names for each watcher service
APIPOD=$(oc get pods -l service=watcher-api -o jsonpath='{.items[0].metadata.name}')
APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath='{.items[0].metadata.name}')
DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath='{.items[0].metadata.name}')

# Verify memcached mTLS config parameters in watcher-api config
if [ -n "${APIPOD}" ]; then
  echo "Checking watcher-api config..."
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  # Verify mTLS config parameters in memcached backend config
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-applier config
if [ -n "${APPLIERPOD}" ]; then
  echo "Checking watcher-applier config..."
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-decision-engine config
if [ -n "${DECISIONENGINEPOD}" ]; then
  echo "Checking watcher-decision-engine config..."
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-api config...'
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | Checking watcher-api config...
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + '[' 0 == 1 ']'
logger.go:42: 07:01:39 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail

oc project ${NAMESPACE}
# Get pod names for each watcher service
APIPOD=$(oc get pods -l service=watcher-api -o jsonpath='{.items[0].metadata.name}')
APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath='{.items[0].metadata.name}')
DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath='{.items[0].metadata.name}')

# Verify memcached mTLS config parameters in watcher-api config
if [ -n "${APIPOD}" ]; then
  echo "Checking watcher-api config..."
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  # Verify mTLS config parameters in memcached backend config
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-applier config
if [ -n "${APPLIERPOD}" ]; then
  echo "Checking watcher-applier config..."
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-decision-engine config
if [ -n "${DECISIONENGINEPOD}" ]; then
  echo "Checking watcher-decision-engine config..."
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:01:39 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-api config...'
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | Checking watcher-api config...
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found ("watcher-api")
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ echo
logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + '[' 0 == 1 ']'
logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail

oc project ${NAMESPACE}
# Get pod names for each watcher service
APIPOD=$(oc get pods -l service=watcher-api -o jsonpath='{.items[0].metadata.name}')
APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath='{.items[0].metadata.name}')
DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath='{.items[0].metadata.name}')

# Verify memcached mTLS config parameters in watcher-api config
if [ -n "${APIPOD}" ]; then
  echo "Checking watcher-api config..."
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  # Verify mTLS config parameters in memcached backend config
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-applier config
if [ -n "${APPLIERPOD}" ]; then
  echo "Checking watcher-applier config..."
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-decision-engine config
if [ -n "${DECISIONENGINEPOD}" ]; then
  echo "Checking watcher-decision-engine config..."
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default
logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-api config...'
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | Checking watcher-api config...
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found ("watcher-api")
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ echo
logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + '[' 0 == 1 ']'
logger.go:42: 07:01:43 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail

oc project ${NAMESPACE}
# Get pod names for each watcher service
APIPOD=$(oc get pods -l service=watcher-api -o jsonpath='{.items[0].metadata.name}')
APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath='{.items[0].metadata.name}')
DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath='{.items[0].metadata.name}')

# Verify memcached mTLS config parameters in watcher-api config
if [ -n "${APIPOD}" ]; then
  echo "Checking watcher-api config..."
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  # Verify mTLS config parameters in memcached backend config
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-applier config
if [ -n "${APPLIERPOD}" ]; then
  echo "Checking watcher-applier config..."
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-decision-engine config
if [ -n "${DECISIONENGINEPOD}" ]; then
  echo "Checking watcher-decision-engine config..."
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:01:43 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-api config...'
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | Checking watcher-api config...
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found ("watcher-api")
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ echo
logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + '[' 0 == 1 ']'
logger.go:42: 07:01:45 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail

oc project ${NAMESPACE}
# Get pod names for each watcher service
APIPOD=$(oc get pods -l service=watcher-api -o jsonpath='{.items[0].metadata.name}')
APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath='{.items[0].metadata.name}')
DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath='{.items[0].metadata.name}')

# Verify memcached mTLS config parameters in watcher-api config
if [ -n "${APIPOD}" ]; then
  echo "Checking watcher-api config..."
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  # Verify mTLS config parameters in memcached backend config
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-applier config
if [ -n "${APPLIERPOD}" ]; then
  echo "Checking watcher-applier config..."
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi

# Verify memcached mTLS config parameters in watcher-decision-engine config
if [ -n "${DECISIONENGINEPOD}" ]; then
  echo "Checking watcher-decision-engine config..."
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]

  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]
  [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o 'jsonpath={.items[0].metadata.name}'
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-api config...'
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | Checking watcher-api config...
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_keyfile = /etc/pki/tls/private/mtls.key'
logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt'
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_enabled = true'
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt
logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'
logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key
logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file 
= /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = 
/etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\n \ logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher 
auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-applier-0 ']'\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-applier config...'\n \ logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | Checking watcher-applier config...\n logger.go:42: 07:01:48 | 
watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool 
memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_keyfile = /etc/pki/tls/private/mtls.key'\n logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt 
memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt'\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier 
watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true 
tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_enabled = true'\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default 
password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt\n \ logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher 
debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal 
'[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key\n \ logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default 
project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\n \ logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' 
'[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 
'[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-decision-engine-0 ']'\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-decision-engine config...'\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | Checking watcher-decision-engine config...\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found (\"watcher-decision-engine\")\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ echo\n logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + '[' 0 == 1 ']'\n logger.go:42: 07:01:51 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail\n \n oc project ${NAMESPACE}\n \ # Get pod names for each watcher service\n APIPOD=$(oc get pods -l service=watcher-api -o jsonpath='{.items[0].metadata.name}')\n APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath='{.items[0].metadata.name}')\n DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath='{.items[0].metadata.name}')\n \ \n # Verify memcached mTLS config parameters in watcher-api config\n \ if [ -n \"${APIPOD}\" ]; then\n echo \"Checking watcher-api config...\"\n \ [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt \") == 1 ]\n [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_keyfile = 
/etc/pki/tls/private/mtls.key\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_enabled = true\") == 1 ]\n \n # Verify mTLS config parameters in memcached backend config\n [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_certfile=/etc/pki/tls/certs/mtls.crt\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_keyfile=/etc/pki/tls/private/mtls.key\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\") == 1 ]\n else\n exit 1\n fi\n \n # Verify memcached mTLS config parameters in watcher-applier config\n if [ -n \"${APPLIERPOD}\" ]; then\n echo \"Checking watcher-applier config...\"\n [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt \") == 1 ]\n [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_keyfile = /etc/pki/tls/private/mtls.key\") == 1 ]\n [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt\") == 1 ]\n [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_enabled = true\") == 1 ]\n \n [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_certfile=/etc/pki/tls/certs/mtls.crt\") == 1 ]\n [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | 
grep -c \"tls_keyfile=/etc/pki/tls/private/mtls.key\") == 1 ]\n [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\") == 1 ]\n else\n \ exit 1\n fi\n \n # Verify memcached mTLS config parameters in watcher-decision-engine config\n if [ -n \"${DECISIONENGINEPOD}\" ]; then\n \ echo \"Checking watcher-decision-engine config...\"\n [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt \") == 1 ]\n [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_keyfile = /etc/pki/tls/private/mtls.key\") == 1 ]\n [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt\") == 1 ]\n [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"memcache_tls_enabled = true\") == 1 ]\n \n [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_certfile=/etc/pki/tls/certs/mtls.crt\") == 1 ]\n [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_keyfile=/etc/pki/tls/private/mtls.key\") == 1 ]\n [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c \"tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\") == 1 ]\n else\n \ exit 1\n fi\n ]\n logger.go:42: 07:01:51 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default\n logger.go:42: 07:01:51 | watcher-tls/2-patch-mtls | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:01:51 | 
watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o 'jsonpath={.items[0].metadata.name}'\n logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o 'jsonpath={.items[0].metadata.name}'\n \ logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0\n \ logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o 'jsonpath={.items[0].metadata.name}'\n logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0\n logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-api config...'\n \ logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | Checking watcher-api config...\n \ logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '\n logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' 
memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n 
logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_keyfile = /etc/pki/tls/private/mtls.key'\n \ logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt'\n \ logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_enabled = true'\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt\n \ logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key\n \ logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\n \ logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-applier-0 ']'\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-applier config...'\n \ logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | Checking watcher-applier config...\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt
memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_keyfile = /etc/pki/tls/private/mtls.key'\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier 
watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt'\n logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_enabled = true'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1'
control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' 
endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt\n \ logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' 
project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key\n \ logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 
'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal 
'[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\n \ logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = 
https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + '[' -n watcher-kuttl-decision-engine-0 ']'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + echo 'Checking watcher-decision-engine config...'\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | Checking watcher-decision-engine config...\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt '\n logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level 
means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' 
endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_keyfile = /etc/pki/tls/private/mtls.key'\n \ logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = 
internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt'\n \ logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher 
transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = 
metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c 'memcache_tls_enabled = true'\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = 
pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt\n \ logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ 
logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key 
tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key\n \ logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt 
memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt\n \ logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | +++ oc rsh -c 
watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true 
tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | + '[' 1 == 1 ']'\n logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | test step completed 2-patch-mtls\n logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | starting test step 3-disable-podlevel-tls\n \ logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/apiServiceTemplate/tls/api\", \"value\":{ \"internal\": {}, \"public\": {} }}]'\n ]\n logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | watcher.watcher.openstack.org/watcher-kuttl patched\n logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail\n \ oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o 
jsonpath={.status.hash.dbsync})\" ]\n # check that watcher internal endpoint does not use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]\n # check that watcher public endpoint does use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n \ counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n \ echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n \ ]\n logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | ++ grep -c '^watcher'\n logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher\n logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | ++ awk '{print $1}'\n logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 
'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + grep internal\n \ logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim\n \ logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list\n logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https\n logger.go:42: 07:02:06 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 0 ']'\n logger.go:42: 07:02:07 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail\n oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n \ SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n # check that watcher internal endpoint does not use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]\n \ # check that watcher public endpoint does use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]\n # If we are running the container 
locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n \ fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n \ counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n \ echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n \ ]\n logger.go:42: 07:02:07 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:02:07 | watcher-tls/3-disable-podlevel-tls | ++ grep -c '^watcher'\n logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | ++ awk '{print $1}'\n logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher\n logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + '[' -n 
nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + grep internal\n \ logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https\n \ logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim\n \ logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list\n logger.go:42: 07:02:14 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 0 ']'\n logger.go:42: 07:02:15 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail\n oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n \ SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n # check that watcher internal endpoint does not use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]\n \ # check that watcher public endpoint does use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n \ fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n \ counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo 
${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n \ echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n \ ]\n logger.go:42: 07:02:15 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:02:15 | watcher-tls/3-disable-podlevel-tls | ++ grep -c '^watcher'\n logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | ++ awk '{print $1}'\n logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher\n logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list\n logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + grep internal\n logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim\n logger.go:42: 07:02:20 | 
watcher-tls/3-disable-podlevel-tls | ++ grep -c https\n logger.go:42: 07:02:23 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 0 ']'\n logger.go:42: 07:02:24 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail\n oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n \ # check that watcher internal endpoint does not use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]\n # check that watcher public endpoint does use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:02:24 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:02:24 | watcher-tls/3-disable-podlevel-tls | ++ grep -c 
'^watcher'\n logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | ++ awk '{print $1}'\n logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher\n logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list\n logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim\n logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + grep internal\n logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https\n logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + '[' 0 == 0 ']'\n logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list\n logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + grep public\n logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim\n logger.go:42: 07:02:36 | 
watcher-tls/3-disable-podlevel-tls | ++ grep -c https\n logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n \ logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | + '[' '' == '' ']'\n logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | + exit 0\n \ logger.go:42: 07:02:39 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail\n oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n # check that watcher internal endpoint does not use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]\n # check that watcher public endpoint does use https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo 
\"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:02:39 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:02:39 | watcher-tls/3-disable-podlevel-tls | ++ grep -c '^watcher'\n logger.go:42: 07:02:41 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:02:41 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:02:41 | watcher-tls/3-disable-podlevel-tls | ++ awk '{print $1}'\n logger.go:42: 07:02:42 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher\n logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list\n logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + grep internal\n logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim\n logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https\n logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | + '[' 0 == 0 ']'\n logger.go:42: 07:02:46 | 
watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list\n logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | + grep public\n logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim\n logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https\n logger.go:42: 07:02:48 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:02:48 | watcher-tls/3-disable-podlevel-tls | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n \ logger.go:42: 07:02:49 | watcher-tls/3-disable-podlevel-tls | + '[' '' == '' ']'\n logger.go:42: 07:02:49 | watcher-tls/3-disable-podlevel-tls | + exit 0\n \ logger.go:42: 07:02:49 | watcher-tls/3-disable-podlevel-tls | test step completed 3-disable-podlevel-tls\n logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | starting test step 4-deploy-without-route\n logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/apiServiceTemplate/override\", \"value\":{\"service\": { \"internal\": {}, \"public\": { \"metadata\": { \"annotations\": { \"metallb.universe.tf/address-pool\": \"ctlplane\", \"metallb.universe.tf/allow-shared-ip\": \"ctlplane\" } }, \"spec\": { \"type\": \"LoadBalancer\" } } } }}]'\n ]\n logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | watcher.watcher.openstack.org/watcher-kuttl patched\n logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | running command: [sh -c set -euxo pipefail\n oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n \ SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o 
jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n \ counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n \ echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n \ ]\n logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | ++ grep -c '^watcher'\n logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | + '[' 1 == 1 ']'\n logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | ++ awk '{print $1}'\n logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | ++ grep watcher\n logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default 
watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + '[' '' == '' ']'\n logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + exit 0\n logger.go:42: 07:02:55 | watcher-tls/4-deploy-without-route | running command: [sh -c set -euxo pipefail\n \ oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:02:55 | watcher-tls/4-deploy-without-route | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:02:55 | 
watcher-tls/4-deploy-without-route | ++ grep -c '^watcher'\n logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | + '[' 1 == 1 ']'\n logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | ++ awk '{print $1}'\n logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | ++ grep watcher\n logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + '[' '' == '' ']'\n logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + exit 0\n logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | test step completed 4-deploy-without-route\n \ logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | starting test step 5-disable-tls\n \ logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/apiServiceTemplate/override\", \"value\":{}}]'\n ]\n logger.go:42: 
07:03:00 | watcher-tls/5-disable-tls | watcher.watcher.openstack.org/watcher-kuttl patched\n logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | running command: [sh -c set -euxo pipefail\n oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n # check that no watcher endpoint uses https\n \ oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 0 ]\n ]\n logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | ++ grep -c '^watcher'\n logger.go:42: 07:03:03 | watcher-tls/5-disable-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:03:03 | watcher-tls/5-disable-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:03:03 | 
watcher-tls/5-disable-tls | ++ awk '{print $1}'\n logger.go:42: 07:03:03 | watcher-tls/5-disable-tls | ++ grep watcher\n logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | + SERVICEID=95cae6c02c914be89de8a64359351912\n \ logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | + '[' '' == '' ']'\n logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | + exit 0\n logger.go:42: 07:03:07 | watcher-tls/5-disable-tls | running command: [sh -c set -euxo pipefail\n oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n \ SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n 
openstack-operators --list)\n \ counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n \ echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n \ # check that no watcher endpoint uses https\n oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 0 ]\n ]\n logger.go:42: 07:03:07 | watcher-tls/5-disable-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:03:07 | watcher-tls/5-disable-tls | ++ grep -c '^watcher'\n logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | + '[' 1 == 1 ']'\n logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | ++ grep watcher\n \ logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | ++ awk '{print $1}'\n logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | + SERVICEID=95cae6c02c914be89de8a64359351912\n logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'\n \ logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | ++ oc get pods -n openstack-operators -o name -l 
openstack.org/operator-name=watcher
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | + '[' '' == '' ']'
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | + exit 0
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | test step completed 5-disable-tls
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | starting test step 6-cleanup-watcher
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:15 | watcher-tls/6-cleanup-watcher | + '[' 1 == 0 ']'
logger.go:42: 07:03:16 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:16 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:16 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:19 | watcher-tls/6-cleanup-watcher | + '[' 0 == 0 ']'
logger.go:42: 07:03:20 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:20 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:20 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:23 | watcher-tls/6-cleanup-watcher | + '[' 0 == 0 ']'
logger.go:42: 07:03:24 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:24 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:24 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:26 | watcher-tls/6-cleanup-watcher | + '[' 0 == 0 ']'
logger.go:42: 07:03:26 | watcher-tls/6-cleanup-watcher | test step completed 6-cleanup-watcher
logger.go:42: 07:03:26 | watcher-tls/7-cleanup-certs | starting test step 7-cleanup-certs
logger.go:42: 07:03:26 | watcher-tls/7-cleanup-certs | test step completed 7-cleanup-certs
logger.go:42: 07:03:26 | watcher-tls | skipping kubernetes event logging
=== CONT kuttl/harness/watcher-rmquser
logger.go:42: 07:03:26 | watcher-rmquser | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:03:26 | watcher-rmquser/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:03:26 | watcher-rmquser/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | starting test step 1-deploy
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | running command: [sh -c set -euxo pipefail

# Wait for Watcher to be Ready
kubectl wait --for=condition=Ready watcher/watcher-kuttl -n $NAMESPACE --timeout=300s

# Verify WatcherNotificationTransportURLReady condition exists and is True
kubectl get watcher watcher-kuttl -n $NAMESPACE -o jsonpath='{.status.conditions[?(@.type=="WatcherNotificationTransportURLReady")].status}' | grep -q "True"
echo "WatcherNotificationTransportURLReady condition is True"

# Count TransportURL CRs - should be exactly 2 (one for messaging, one for notifications)
transport_count=$(kubectl get transporturl -n $NAMESPACE -o name | grep "watcher-kuttl-watcher-transport" | wc -l)
notification_transport_count=$(kubectl get transporturl -n $NAMESPACE -o name | grep "watcher-kuttl-watcher-notification" | wc -l)

if [ "$transport_count" -ne "1" ]; then
  echo "Expected 1 watcher-transport TransportURL, found $transport_count"
  exit 1
fi

if [ "$notification_transport_count" -ne "1" ]; then
  echo "Expected 1 notification-transport TransportURL, found $notification_transport_count"
  exit 1
fi

echo "Correctly found 2 TransportURLs (separate clusters: transport and notification)"

# Verify watcher-transport has correct user and vhost
transport_user=$(kubectl get transporturl watcher-kuttl-watcher-transport -n $NAMESPACE -o jsonpath='{.spec.username}')
transport_vhost=$(kubectl get transporturl watcher-kuttl-watcher-transport -n $NAMESPACE -o jsonpath='{.spec.vhost}')
if [ "$transport_user" != "watcher-rpc" ]; then
  echo "Expected watcher-transport username 'watcher-rpc', found '$transport_user'"
  exit 1
fi
if [ "$transport_vhost" != "watcher-rpc" ]; then
  echo "Expected watcher-transport vhost 'watcher-rpc', found '$transport_vhost'"
  exit 1
fi
echo "Watcher transport has correct user (watcher-rpc) and vhost (watcher-rpc)"

# Verify notification-transport has correct user and vhost
notif_user=$(kubectl get transporturl watcher-kuttl-watcher-notification-rabbitmq-notifications -n $NAMESPACE -o jsonpath='{.spec.username}')
notif_vhost=$(kubectl get transporturl watcher-kuttl-watcher-notification-rabbitmq-notifications -n $NAMESPACE -o jsonpath='{.spec.vhost}')
if [ "$notif_user" != "watcher-notifications" ]; then
  echo "Expected notification-transport username 'watcher-notifications', found '$notif_user'"
  exit 1
fi
if [ "$notif_vhost" != "watcher-notifications" ]; then
  echo "Expected notification-transport vhost 'watcher-notifications', found '$notif_vhost'"
  exit 1
fi
echo "Notification transport has correct user (watcher-notifications) and vhost (watcher-notifications)"

# Verify that watcher.conf contains the notifications transport_url
WATCHER_API_POD=$(kubectl get pods -n $NAMESPACE -l "service=watcher-api" -o custom-columns=:metadata.name --no-headers | grep -v ^$ | head -1)
if [ -z "${WATCHER_API_POD}" ]; then
  echo "No watcher-api pod found"
  exit 1
fi
# Verify RPC transport_url in DEFAULT section
rpc_transport_url=$(kubectl exec -n $NAMESPACE ${WATCHER_API_POD} -c watcher-api -- cat /etc/watcher/watcher.conf.d/00-default.conf | grep -E '^\[DEFAULT\]' -A 50 | grep 'transport_url' | head -1 || true)
if [ -z "$rpc_transport_url" ]; then
  echo "transport_url not found in DEFAULT section"
  exit 1
fi
echo "Found RPC transport_url: $rpc_transport_url"

# Verify the RPC transport_url contains the correct vhost (watcher-rpc)
if ! echo "$rpc_transport_url" | grep -q '/watcher-rpc'; then
  echo "RPC transport_url does not contain expected vhost '/watcher-rpc'"
  exit 1
fi
echo "Successfully verified vhost 'watcher-rpc' in RPC transport_url"

# Verify the RPC transport_url contains the correct username (watcher-rpc)
if ! echo "$rpc_transport_url" | grep -q 'watcher-rpc:'; then
  echo "RPC transport_url does not contain expected username 'watcher-rpc:'"
  exit 1
fi
echo "Successfully verified username 'watcher-rpc' in RPC transport_url"

# Verify oslo_messaging_notifications section has transport_url configured
notif_transport_url=$(kubectl exec -n $NAMESPACE ${WATCHER_API_POD} -c watcher-api -- cat /etc/watcher/watcher.conf.d/00-default.conf | grep -A 5 '\[oslo_messaging_notifications\]' | grep 'transport_url' || true)
if [ -z "$notif_transport_url" ]; then
  echo "transport_url not found in oslo_messaging_notifications section"
  exit 1
fi
echo "Found notifications transport_url: $notif_transport_url"

# Verify the notifications transport_url contains the correct vhost (watcher-notifications)
if ! echo "$notif_transport_url" | grep -q '/watcher-notifications'; then
  echo "Notifications transport_url does not contain expected vhost '/watcher-notifications'"
  exit 1
fi
echo "Successfully verified vhost 'watcher-notifications' in notifications transport_url"

# Verify the notifications transport_url contains the correct username (watcher-notifications)
if ! echo "$notif_transport_url" | grep -q 'watcher-notifications:'; then
  echo "Notifications transport_url does not contain expected username 'watcher-notifications:'"
  exit 1
fi
echo "Successfully verified username 'watcher-notifications' in notifications transport_url"

exit 0
]
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | + kubectl wait --for=condition=Ready watcher/watcher-kuttl -n watcher-kuttl-default --timeout=300s
logger.go:42: 07:08:26 | watcher-rmquser/1-deploy | error: timed out waiting for the condition on watchers/watcher-kuttl
logger.go:42: 07:08:26 | watcher-rmquser/1-deploy | test step failed 1-deploy
case.go:396: failed in step 1-deploy
case.go:398: transporturls.rabbitmq.openstack.org "watcher-kuttl-watcher-transport" not found
case.go:398: transporturls.rabbitmq.openstack.org "watcher-kuttl-watcher-notification-rabbitmq-notifications" not found
case.go:398: command "kubectl wait --for=condition=Ready watcher/watcher-kuttl -n $NAMESP..." exceeded 300 sec timeout, context deadline exceeded
logger.go:42: 07:08:26 | watcher-rmquser | skipping kubernetes event logging
=== CONT kuttl/harness/watcher-api-scaling
logger.go:42: 07:08:26 | watcher-api-scaling | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:08:26 | watcher-api-scaling/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:08:27 | watcher-api-scaling/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | starting test step 1-deploy-with-defaults
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:29 | watcher-api-scaling/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:08:30 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:30 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:30 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:34 | watcher-api-scaling/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:08:35 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:35 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:35 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:37 | watcher-api-scaling/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:08:38 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:38 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:38 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | + '[' 1 == 1 ']'
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | ++ awk '{print $1}'
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | + '[' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb ']'
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | + '[' -n '' ']'
logger.go:42: 07:08:44 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:44 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:44 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | + '[' 1 == 1 ']'
logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | ++ awk '{print $1}'
logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher
logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb
logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | + '[' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb ']'
logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:08:51 | watcher-api-scaling/1-deploy-with-defaults | + '[' '' == '' ']'
logger.go:42: 07:08:51 | watcher-api-scaling/1-deploy-with-defaults | + exit 0
logger.go:42: 07:08:52 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:52 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:52 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | + '[' 1 == 1 ']'
logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | ++ awk '{print $1}'
logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher
logger.go:42: 07:08:56 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb
logger.go:42: 07:08:56 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + '[' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb ']'
logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + '[' '' == '' ']'
logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + exit 0
logger.go:42: 07:08:58 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
  exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
  if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
    echo ${i}
    counter=$((counter + 1))
  fi
done
if [ ${counter} -lt 3 ]; then
  echo "Error: Less than 3 _URL_DEFAULT variables found."
  exit 1
else
  echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:58 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:58 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | + '[' 1 == 1 ']'
logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher
logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | ++ awk '{print $1}'
logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:09:02 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb
logger.go:42: 07:09:02 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + '[' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb ']'
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + '[' '' == '' ']'
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + exit 0
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | test step completed 1-deploy-with-defaults
logger.go:42: 07:09:03 | watcher-api-scaling/2-scale-up-watcher-api | starting test step 2-scale-up-watcher-api
logger.go:42: 07:09:03 | watcher-api-scaling/2-scale-up-watcher-api | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{"op": "replace", "path": "/spec/apiServiceTemplate/replicas", "value":3}]'
]
logger.go:42: 07:09:03 | watcher-api-scaling/2-scale-up-watcher-api | watcher.watcher.openstack.org/watcher-kuttl patched
logger.go:42: 07:09:14 | watcher-api-scaling/2-scale-up-watcher-api | test step completed 2-scale-up-watcher-api
logger.go:42: 07:09:14 | watcher-api-scaling/3-scale-down-watcher-api | starting test step 3-scale-down-watcher-api
logger.go:42: 07:09:14 | watcher-api-scaling/3-scale-down-watcher-api | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{"op": "replace", "path": "/spec/apiServiceTemplate/replicas", "value":1}]'
]
logger.go:42: 07:09:15 | watcher-api-scaling/3-scale-down-watcher-api | watcher.watcher.openstack.org/watcher-kuttl patched
logger.go:42: 07:09:19 | watcher-api-scaling/3-scale-down-watcher-api | test step completed 3-scale-down-watcher-api
logger.go:42: 07:09:19 | watcher-api-scaling/4-scale-down-zero-watcher-api | starting test step 4-scale-down-zero-watcher-api
logger.go:42: 07:09:19 | watcher-api-scaling/4-scale-down-zero-watcher-api | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{"op": "replace", "path": "/spec/apiServiceTemplate/replicas", "value":0}]'
]
logger.go:42: 07:09:19 | watcher-api-scaling/4-scale-down-zero-watcher-api | watcher.watcher.openstack.org/watcher-kuttl patched
logger.go:42: 07:09:20 | watcher-api-scaling/4-scale-down-zero-watcher-api | test step completed 4-scale-down-zero-watcher-api
logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | starting test step 5-cleanup-watcher
logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:09:23 | watcher-api-scaling/5-cleanup-watcher | + '[' 1 == 0 ']'
logger.go:42: 07:09:24 | watcher-api-scaling/5-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:09:24 | watcher-api-scaling/5-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:09:24 | watcher-api-scaling/5-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:09:27 | watcher-api-scaling/5-cleanup-watcher | + '[' 0 == 0 ']'
logger.go:42: 07:09:27 | watcher-api-scaling/5-cleanup-watcher | test step completed 5-cleanup-watcher
logger.go:42: 07:09:27 | watcher-api-scaling | skipping kubernetes event logging
=== CONT kuttl/harness/watcher-cinder
logger.go:42: 07:09:27 | watcher-cinder | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:09:27 | watcher-cinder/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:09:27 | watcher-cinder/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | starting test step 1-deploy-watcher-no-cinder
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:30 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:30 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:30 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:31 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:31 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc
logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods \"watcher-kuttl-decision-engine-0\" not found in namespace \"watcher-kuttl-default\"\n logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods \"watcher-kuttl-decision-engine-0\" not found in namespace \"watcher-kuttl-default\"\n logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 
'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods \"watcher-kuttl-decision-engine-0\" not found in namespace \"watcher-kuttl-default\"\n logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods \"watcher-kuttl-decision-engine-0\" not found in namespace \"watcher-kuttl-default\"\n logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:37 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:37 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:37 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:38 | watcher-cinder/1-deploy-watcher-no-cinder | 
error: error from server (NotFound): pods \"watcher-kuttl-decision-engine-0\" not found in namespace \"watcher-kuttl-default\"\n logger.go:42: 07:09:38 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods \"watcher-kuttl-decision-engine-0\" not found in namespace \"watcher-kuttl-default\"\n logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | Error from server (BadRequest): container \"watcher-decision-engine\" in pod \"watcher-kuttl-decision-engine-0\" is waiting to start: 
ContainerCreating\n logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:09:43 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:43 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs 
-n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:43 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:44 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 1 == 2 ']'\n logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE 
watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:09:49 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:50 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:50 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:50 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:09:51 | 
watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | test step completed 1-deploy-watcher-no-cinder\n logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | starting test step 2-deploy-cinder\n logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | OpenStackControlPlane:watcher-kuttl-default/openstack updated\n logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n \ # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc 
logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:09:53 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:09:53 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:53 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:09:54 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | running 
command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:09:58 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:09:58 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:09:58 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping 
storage collector'\n logger.go:42: 07:09:59 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | ++ oc logs -n 
watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:03 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:03 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:03 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:04 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 
|grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:09 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the 
decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:09 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:09 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:10 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:12 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:12 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:12 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:12 | 
watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:14 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:14 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:14 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:15 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage 
collector'\n logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:19 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping 
storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:19 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:19 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:20 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does 
not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:24 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:24 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:24 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:25 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 
07:10:27 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:27 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:27 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:27 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | ++ grep -c 
'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:30 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:30 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:30 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:31 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:33 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:33 | 
watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:33 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:33 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:36 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n 
$NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:36 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:36 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:37 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | running command: [sh -c 
set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:41 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:41 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:41 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:42 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage 
collector'\n logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | ++ oc logs -n 
watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 
|grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the 
decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'\n logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:56 | 
watcher-cinder/2-deploy-cinder | Error from server (BadRequest): container \"watcher-decision-engine\" in pod \"watcher-kuttl-decision-engine-0\" is waiting to start: ContainerCreating\n logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:10:57 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:57 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:57 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:58 | watcher-cinder/2-deploy-cinder | Error from server (BadRequest): container \"watcher-decision-engine\" in pod \"watcher-kuttl-decision-engine-0\" is waiting to start: ContainerCreating\n logger.go:42: 07:10:58 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | running command: [sh -c 
set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage 
collector'\n logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:03 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:03 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:03 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:04 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | ++ oc logs -n 
watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:08 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision detects that there is a cinder service and\n # does not log that storage collector is skipped\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 0 ]\n ]\n logger.go:42: 07:11:08 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:08 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:09 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'\n logger.go:42: 07:11:09 | watcher-cinder/2-deploy-cinder | test step completed 2-deploy-cinder\n logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | starting test step 3-remove-cinder\n logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | 
OpenStackControlPlane:watcher-kuttl-default/openstack updated\n logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:12 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:12 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:12 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:13 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:18 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:18 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:18 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:19 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | Error from server (BadRequest): container \"watcher-decision-engine\" in pod \"watcher-kuttl-decision-engine-0\" is waiting to start: ContainerCreating\n logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:24 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:24 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:24 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:25 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo
pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'\n logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | + '[' 1 == 2 ']'\n \ logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:11:29 | 
watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:29 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:29 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:11:30 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'\n \ logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:11:32 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:32 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:32 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:11:32 | 
watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'\n \ logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'\n logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail\n # check that the decision engine correctly detects that there is no cinder service\n [ \"$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')\" == 2 ]\n ]\n logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0\n logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'\n \ logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'\n \ logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | test step completed 3-remove-cinder\n logger.go:42: 07:11:34 | watcher-cinder/4-cleanup-watcher | starting test step 4-cleanup-watcher\n logger.go:42: 07:11:43 | watcher-cinder/4-cleanup-watcher | test step completed 4-cleanup-watcher\n logger.go:42: 07:11:43 | watcher-cinder | skipping kubernetes event logging\n=== CONT kuttl/harness/watcher\n logger.go:42: 07:11:43 | watcher | Skipping creation of user-supplied namespace: watcher-kuttl-default\n \ logger.go:42: 07:11:43 | watcher/0-cleanup-watcher | starting 
test step 0-cleanup-watcher\n \ logger.go:42: 07:11:43 | watcher/0-cleanup-watcher | test step completed 0-cleanup-watcher\n \ logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | starting test step 1-deploy-with-defaults\n logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | Watcher:watcher-kuttl-default/watcher-kuttl created\n logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail\n \ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\\.cnf}'|base64 -d|grep -c 'ssl=1')\" == 1 ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')\" == 2 ]\n # If we are running the container locally, skip following test\n if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:11:43 | 
watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'\n logger.go:42: 07:11:46 | watcher/1-deploy-with-defaults | + '[' 0 == 1 ']'\n logger.go:42: 07:11:47 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\\.cnf}'|base64 -d|grep -c 'ssl=1')\" == 1 ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')\" == 2 ]\n # If we are running the container locally, skip following test\n \ if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:11:47 | watcher/1-deploy-with-defaults | + oc exec -n 
watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:11:47 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'\n logger.go:42: 07:11:49 | watcher/1-deploy-with-defaults | + '[' 0 == 1 ']'\n logger.go:42: 07:11:50 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\\.cnf}'|base64 -d|grep -c 'ssl=1')\" == 1 ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')\" == 2 ]\n # If we are running the container locally, skip following test\n \ if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:11:50 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack 
service list -f value -c Name -c Type\n logger.go:42: 07:11:50 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'\n logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | ++ awk '{print $1}'\n logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | ++ grep watcher\n logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e\n logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | + '[' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e ']'\n logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1\n \ logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ base64 -d\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.my\\.cnf}'\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'\n \ logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.00-default\\.conf}'\n logger.go:42: 07:11:56 | 
watcher/1-deploy-with-defaults | ++ base64 -d\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + '[' 2 == 2 ']'\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + '[' '' == '' ']'\n logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + exit 0\n logger.go:42: 07:11:57 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n \ SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n \ [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\\.cnf}'|base64 -d|grep -c 'ssl=1')\" == 1 ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')\" == 2 ]\n # If we are running the container locally, skip following test\n \ if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: 
${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:11:57 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:11:57 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'\n logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | ++ grep watcher\n logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | ++ awk '{print $1}'\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + '[' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e ']'\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ base64 -d\n \ logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.my\\.cnf}'\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 
'jsonpath={.data.00-default\\.conf}'\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'\n \ logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ base64 -d\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + '[' 2 == 2 ']'\n logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n logger.go:42: 07:12:03 | watcher/1-deploy-with-defaults | + '[' '' == '' ']'\n logger.go:42: 07:12:03 | watcher/1-deploy-with-defaults | + exit 0\n logger.go:42: 07:12:04 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n \ SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n \ [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\\.cnf}'|base64 -d|grep -c 'ssl=1')\" == 1 ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')\" == 2 ]\n # If we are running the container locally, skip following test\n \ if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> 
/dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:12:04 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:12:04 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'\n logger.go:42: 07:12:06 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:12:06 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:12:06 | watcher/1-deploy-with-defaults | ++ awk '{print $1}'\n logger.go:42: 07:12:06 | watcher/1-deploy-with-defaults | ++ grep watcher\n logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e\n logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | + '[' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e ']'\n logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.my\\.cnf}'\n \ logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1\n \ logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ base64 -d\n logger.go:42: 07:12:10 | 
watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ base64 -d\n logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'\n \ logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.00-default\\.conf}'\n logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | + '[' 2 == 2 ']'\n logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | + '[' '' == '' ']'\n logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | + exit 0\n logger.go:42: 07:12:11 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]\n \ SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')\n \ [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]\n [ -n \"$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})\" ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\\.cnf}'|base64 -d|grep -c 'ssl=1')\" == 1 ]\n [ \"$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')\" == 2 ]\n # If we are running the container locally, skip following test\n \ if [ \"$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)\" == \"\" ]; then\n exit 0\n fi\n env_variables=$(oc set env $(oc get pods -n 
openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)\n counter=0\n for i in ${env_variables}; do\n if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then\n echo ${i}\n counter=$((counter + 1))\n fi\n done\n if [ ${counter} -lt 3 ]; then\n echo \"Error: Less than 3 _URL_DEFAULT variables found.\"\n exit 1\n else\n echo \"Success: ${counter} _URL_DEFAULT variables found.\"\n fi\n ]\n logger.go:42: 07:12:11 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:12:11 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'\n logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID\n logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | ++ grep watcher\n logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | ++ awk '{print $1}'\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'\n \ logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + '[' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e ']'\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'\n \ logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ base64 -d\n \ logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1\n \ logger.go:42: 07:12:16 
| watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.my\\.cnf}'\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o 'jsonpath={.data.00-default\\.conf}'\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ base64 -d\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'\n \ logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + '[' 2 == 2 ']'\n \ logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + '[' '' == '' ']'\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + exit 0\n logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | test step completed 1-deploy-with-defaults\n \ logger.go:42: 07:12:16 | watcher/2-cleanup-watcher | starting test step 2-cleanup-watcher\n \ logger.go:42: 07:12:16 | watcher/2-cleanup-watcher | test step completed 2-cleanup-watcher\n \ logger.go:42: 07:12:16 | watcher/3-precreate-mariadbaccount | starting test step 3-precreate-mariadbaccount\n logger.go:42: 07:12:16 | watcher/3-precreate-mariadbaccount | MariaDBAccount:watcher-kuttl-default/watcher-precreated created\n logger.go:42: 07:12:16 | watcher/3-precreate-mariadbaccount | test step completed 3-precreate-mariadbaccount\n \ logger.go:42: 07:12:16 | watcher/4-deploy-with-precreated-account | starting test step 4-deploy-with-precreated-account\n logger.go:42: 07:12:16 | watcher/4-deploy-with-precreated-account | Secret:wa**********ig created\n logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | Watcher:watcher-kuttl-default/watcher-kuttl created\n logger.go:42: 07:12:17 | 
watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | 
+++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:19 | watcher/4-deploy-with-precreated-account | error: Internal error occurred: error executing command in container: container is not created or running\n logger.go:42: 07:12:19 | watcher/4-deploy-with-precreated-account | ++ echo\n logger.go:42: 07:12:19 | watcher/4-deploy-with-precreated-account | + '[' 0 == 1 ']'\n logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | 
++ head -1\n logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:22 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | running command: 
[sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api 
${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c 
watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:26 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:27 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo 
'\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:27 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:27 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:27 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:27 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:27 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:27 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:28 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:28 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:28 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:28 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:28 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:28 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:29 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:30 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:30 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:30 | 
watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:30 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:30 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:30 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:30 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:31 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:31 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:31 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:31 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l 
service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:31 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:31 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:31 | watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:32 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:32 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:33 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:33 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:33 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:33 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:33 | 
watcher/4-deploy-with-precreated-account | + APIPOD=\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n \ APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# 
Global config'\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | error: unable to upgrade connection: container not found (\"watcher-api\")\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ echo\n logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | + '[' 0 == 1 ']'\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 
07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level 
means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem 
'[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' 
Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ 
logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:12:38 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path 
= /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf
logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'
logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration ''
ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' ''
logger.go:42: 07:12:39 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail
oc project watcher-kuttl-default
APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)
if [ -n "${APIPOD}" ]; then
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default
logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name
logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | ++ head -1
logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' ''
logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail
oc project watcher-kuttl-default
APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)
if [ -n "${APIPOD}" ]; then
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat
/etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ head -1
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'
logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'
logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf
logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' ''
logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail
oc project watcher-kuttl-default
APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)
if [ -n "${APIPOD}" ]; then
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat
/etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ head -1
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'
logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'
logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf
logger.go:42: 07:12:47 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' ''
logger.go:42: 07:12:47 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail
oc project watcher-kuttl-default
APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)
if [ -n "${APIPOD}" ]; then
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]
  [
$(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]
  [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]
else
  exit 1
fi
]
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ head -1
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'
logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf
logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900
logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'
logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf
logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '"/var/www/cgi-bin"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '"/usr/bin/watcher-api-wsgi"' ''
logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'
logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail
oc project watcher-kuttl-default
APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head
-1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:12:52 | 
watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true 
project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | +++ oc 
rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:53 | watcher/4-deploy-with-precreated-account | running 
command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:53 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat 
/etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true 
memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + 
'[' 1 == 1 ']'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:12:55 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / 
'\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:12:55 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 
07:12:56 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:12:56 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' 
rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 
'[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI 
configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:12:57 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:12:58 | 
watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:12:58 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 
'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal 
'[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes 
+FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:12:59 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:00 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:13:00 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n 
watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher 
debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc 
port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:13:01 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:13:02 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot 
'\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:13:02 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | Already on project 
\"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:13:04 | 
watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt 
tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / 
'\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ 
logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ 
logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool 
memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug 
LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail\n oc project watcher-kuttl-default\n APIPOD=$(oc get pods -n watcher-kuttl-default -l \"service=watcher-api\" -ocustom-columns=:metadata.name|grep -v ^$|head -1)\n if [ -n \"${APIPOD}\" ]; then\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c \"^# Global config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c \"^# Service config\") == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo 
'\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem') == 1 ]\n [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo 'TimeOut 80') == 1 ]\n else\n exit 1\n fi\n ]\n \ logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | Already on project \"watcher-kuttl-default\" on server \"https://api.crc.testing:6443\".\n \ logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name\n \ logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ head -1\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ grep -v '^$'\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + '[' -n watcher-kuttl-api-0 ']'\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Global config'\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf\n \ logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ echo '#' Global config\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ grep -c '^# Service config'\n logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf\n \ logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ echo '#' Service config\n 
logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ grep -czPo '\\[prometheus_client\\]\\s+host\\s+=\\s+metric-storage-prometheus.watcher-kuttl-default.svc\\s+port\\s+=\\s+9090\\s+cafile\\s+=\\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'\n \ logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf\n \ logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ echo '[DEFAULT]' state_path = /var/lib/watcher transport_url = 'rabbit://**********=1' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log '#' empty notification_level means that no notification will be sent notification_level = '[database]' connection = 'mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf' '[oslo_policy]' policy_file = /etc/watcher/policy.yaml.sample '[oslo_messaging_notifications]' driver = noop '[oslo_messaging_rabbit]' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true '[keystone_authtoken]' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[watcher_clients_auth]' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = 
watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem '[oslo_concurrency]' lock_path = /var/lib/watcher/tmp '[watcher_datasources]' datasources = prometheus '[cache]' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 '[prometheus_client]' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem '[cinder_client]' endpoint_type = internal '[glance_client]' endpoint_type = internal '[ironic_client]' endpoint_type = internal '[keystone_client]' interface = internal '[neutron_client]' endpoint_type = internal '[nova_client]' endpoint_type = internal '[placement_client]' interface = internal '[watcher_cluster_data_model_collectors.compute]' period = 900 '[watcher_cluster_data_model_collectors.baremetal]' period = 900 '[watcher_cluster_data_model_collectors.storage]' period = 900\n logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 'TimeOut 80'\n logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf\n \ logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ echo '#' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '' ServerName watcher-internal.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there 
should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' '' '#' public vhost watcher-public.watcher-kuttl-default.svc configuration '' ServerName watcher-public.watcher-kuttl-default.svc '##' Vhost docroot DocumentRoot '\"/var/www/cgi-bin\"' '#' Set the timeout for the watcher-api TimeOut 80 '##' Directories, there should at least be a declaration for /var/www/cgi-bin '' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '' '##' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined 'env=!forwarded' CustomLog /dev/stdout proxy env=forwarded '##' set watcher log level to debug LogLevel debug '##' WSGI configuration WSGIApplicationGroup '%{GLOBAL}' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / '\"/usr/bin/watcher-api-wsgi\"' ''\n logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | + '[' 1 == 1 ']'\n logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | test step completed 4-deploy-with-precreated-account\n \ logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | starting test step 5-cleanup-watcher\n \ logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | running command: [sh -c set -ex\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]\n ]\n logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | + oc exec -n 
watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | ++ grep -c '^watcher'\n logger.go:42: 07:13:13 | watcher/5-cleanup-watcher | + '[' 1 == 0 ']'\n logger.go:42: 07:13:14 | watcher/5-cleanup-watcher | running command: [sh -c set -ex\n oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]\n ]\n logger.go:42: 07:13:14 | watcher/5-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type\n logger.go:42: 07:13:14 | watcher/5-cleanup-watcher | ++ grep -c '^watcher'\n \ logger.go:42: 07:13:16 | watcher/5-cleanup-watcher | + '[' 0 == 0 ']'\n logger.go:42: 07:13:16 | watcher/5-cleanup-watcher | test step completed 5-cleanup-watcher\n logger.go:42: 07:13:16 | watcher | skipping kubernetes event logging\n=== CONT kuttl/harness/deps\n \ logger.go:42: 07:13:16 | deps | Ignoring infra.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n logger.go:42: 07:13:16 | deps | Ignoring keystone.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n \ logger.go:42: 07:13:16 | deps | Ignoring kustomization.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n logger.go:42: 07:13:16 | deps | Ignoring namespace.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n \ logger.go:42: 07:13:16 | deps | Ignoring telemetry.yaml as it does not match file name regexp: ^(\\d+)-(?:[^\\.]+)(?:\\.yaml)?$\n logger.go:42: 07:13:16 | deps | Skipping creation of user-supplied namespace: watcher-kuttl-default\n logger.go:42: 07:13:16 | deps | skipping kubernetes event logging\n=== NAME kuttl\n harness.go:406: run tests finished\n harness.go:514: cleaning up\n harness.go:571: removing temp folder: \"\"\n--- FAIL: kuttl (899.85s)\n --- FAIL: kuttl/harness (0.00s)\n \ --- PASS: 
kuttl/harness/common (0.01s)\n --- PASS: kuttl/harness/watcher-notification (76.18s)\n --- PASS: kuttl/harness/watcher-topology (33.98s)\n --- PASS: kuttl/harness/watcher-tls-certs-change (43.50s)\n --- PASS: kuttl/harness/watcher-tls (156.00s)\n --- FAIL: kuttl/harness/watcher-rmquser (300.20s)\n --- PASS: kuttl/harness/watcher-api-scaling (60.38s)\n --- PASS: kuttl/harness/watcher-cinder (135.83s)\n --- PASS: kuttl/harness/watcher (93.74s)\n --- PASS: kuttl/harness/deps (0.00s)\nFAIL" stdout_lines: - oc kuttl test --v 1 --start-kind=false --config test/kuttl/test-suites/default/config.yaml - === RUN kuttl - ' harness.go:463: starting setup' - ' harness.go:255: running tests using configured kubeconfig.' - ' harness.go:278: Successful connection to cluster at: https://api.crc.testing:6443' - ' harness.go:363: running tests' - ' harness.go:75: going to run test suite with timeout of 300 seconds for each step' - ' harness.go:375: testsuite: test/kuttl/test-suites/default/ has 10 tests' - === RUN kuttl/harness - === RUN kuttl/harness/common - === PAUSE kuttl/harness/common - === RUN kuttl/harness/deps - === PAUSE kuttl/harness/deps - === RUN kuttl/harness/watcher - === PAUSE kuttl/harness/watcher - === RUN kuttl/harness/watcher-api-scaling - === PAUSE kuttl/harness/watcher-api-scaling - === RUN kuttl/harness/watcher-cinder - === PAUSE kuttl/harness/watcher-cinder - === RUN kuttl/harness/watcher-notification - === PAUSE kuttl/harness/watcher-notification - === RUN kuttl/harness/watcher-rmquser - === PAUSE kuttl/harness/watcher-rmquser - === RUN kuttl/harness/watcher-tls - === PAUSE kuttl/harness/watcher-tls - === RUN kuttl/harness/watcher-tls-certs-change - === PAUSE kuttl/harness/watcher-tls-certs-change - === RUN kuttl/harness/watcher-topology - === PAUSE kuttl/harness/watcher-topology - === CONT kuttl/harness/common - ' logger.go:42: 06:58:17 | common | Ignoring cleanup-assert.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' 
logger.go:42: 06:58:17 | common | Ignoring cleanup-errors.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 06:58:17 | common | Ignoring cleanup-watcher.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 06:58:17 | common | Ignoring deploy-with-defaults.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 06:58:17 | common | Skipping creation of user-supplied namespace: watcher-kuttl-default' - ' logger.go:42: 06:58:17 | common | skipping kubernetes event logging' - === CONT kuttl/harness/watcher-notification - ' logger.go:42: 06:58:17 | watcher-notification | Skipping creation of user-supplied namespace: watcher-kuttl-default' - ' logger.go:42: 06:58:17 | watcher-notification/0-cleanup-watcher | starting test step 0-cleanup-watcher' - ' logger.go:42: 06:58:17 | watcher-notification/0-cleanup-watcher | test step completed 0-cleanup-watcher' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | starting test step 1-deploy-with-notification' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | Watcher:watcher-kuttl-default/watcher-kuttl created' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | Now using 
project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:17 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:18 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:18 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:19 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:19 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:19 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:19 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:19 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:20 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:20 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:20 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:20 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:20 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:20 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:20 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:21 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:21 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:21 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:21 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:21 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:21 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:22 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:23 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:23 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:23 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:23 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:23 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:23 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:23 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:24 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:24 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:24 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:24 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:24 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:24 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:25 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:26 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:26 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:26 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:26 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:26 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:26 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:26 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:27 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:27 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:27 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:27 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:27 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:27 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:27 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:29 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:29 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:29 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:29 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:29 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:29 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:29 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:30 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:30 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:30 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:30 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:30 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:30 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:30 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:31 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:31 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:31 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:31 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:31 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:31 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:32 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:33 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:33 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:33 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:33 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:33 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:33 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:33 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:34 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:34 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:34 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:34 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:34 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:34 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:34 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:36 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:36 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:36 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:36 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:36 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:36 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:36 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:37 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:37 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:37 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:37 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:37 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:37 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:37 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:38 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:38 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:39 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:39 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:39 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:39 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:39 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:40 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:40 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:40 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:40 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:40 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:40 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:40 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:41 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:41 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:42 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:42 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:42 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:42 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:42 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:43 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:43 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:43 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:43 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:43 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:43 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:43 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:44 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:44 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:44 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 06:58:44 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:58:44 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:58:44 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:58:44 | watcher-notification/1-deploy-with-notification | + APIPOD=' - ' logger.go:42: 06:58:45 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:58:45 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:58:46 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' [... identical retry iterations from 06:58:46 to 06:59:10 elided: each re-ran the same assert script and found APIPOD empty ...]'
- ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+=''' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = messagingv2 transport_url = ''rabbit://**********=1'' ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = 
https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 06:59:10 | watcher-notification/1-deploy-with-notification | + ''['' 1 == 1 '']''' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | ++ grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+=''' - ' logger.go:42: 06:59:11 | watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 06:59:12 | watcher-notification/1-deploy-with-notification | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample 
''[oslo_messaging_notifications]'' driver = messagingv2 transport_url = ''rabbit://**********=1'' ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 06:59:12 | 
watcher-notification/1-deploy-with-notification | + ''['' 1 == 1 '']''' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+='') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | + oc project watcher-kuttl-default' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | ++ head -1' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | ++ grep -v ''^$''' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | ++ grep -czPo ''\[oslo_messaging_notifications\]\s+driver\s+=\s+messagingv2\s+transport_url\s+=''' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url 
= ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_49c6:586030fce1b9af1d5684354c0bd591b5@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = messagingv2 transport_url = ''rabbit://**********=1'' ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = 
internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 06:59:13 | watcher-notification/1-deploy-with-notification | + ''['' 1 == 1 '']''' - ' logger.go:42: 06:59:22 | 
watcher-notification/1-deploy-with-notification | + ''['' 1 == 1 '']''' - ' logger.go:42: 06:59:22 | watcher-notification/1-deploy-with-notification | test step completed 1-deploy-with-notification' - ' logger.go:42: 06:59:22 | watcher-notification/2-cleanup-watcher | starting test step 2-cleanup-watcher' - ' logger.go:42: 06:59:33 | watcher-notification/2-cleanup-watcher | test step completed 2-cleanup-watcher' - ' logger.go:42: 06:59:33 | watcher-notification | skipping kubernetes event logging' - === CONT kuttl/harness/watcher-topology - ' logger.go:42: 06:59:33 | watcher-topology | Skipping creation of user-supplied namespace: watcher-kuttl-default' - ' logger.go:42: 06:59:33 | watcher-topology/0-cleanup-watcher | starting test step 0-cleanup-watcher' - ' logger.go:42: 06:59:33 | watcher-topology/0-cleanup-watcher | test step completed 0-cleanup-watcher' - ' logger.go:42: 06:59:33 | watcher-topology/1-deploy-with-topology | starting test step 1-deploy-with-topology' - ' logger.go:42: 06:59:33 | watcher-topology/1-deploy-with-topology | Topology:watcher-kuttl-default/watcher-api created' - ' logger.go:42: 06:59:33 | watcher-topology/1-deploy-with-topology | Watcher:watcher-kuttl-default/watcher-kuttl created' - ' logger.go:42: 06:59:59 | watcher-topology/1-deploy-with-topology | test step completed 1-deploy-with-topology' - ' logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | starting test step 2-cleanup-watcher' - ' logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | running command: [sh -c set -ex' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]' - ' ]' - ' logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 06:59:59 | watcher-topology/2-cleanup-watcher | ++ grep -c ''^watcher''' - ' logger.go:42: 07:00:03 | 
watcher-topology/2-cleanup-watcher | + ''['' 0 == 0 '']''' - ' logger.go:42: 07:00:04 | watcher-topology/2-cleanup-watcher | running command: [sh -c set -ex' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]' - ' ]' - ' logger.go:42: 07:00:04 | watcher-topology/2-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:00:04 | watcher-topology/2-cleanup-watcher | ++ grep -c ''^watcher''' - ' logger.go:42: 07:00:07 | watcher-topology/2-cleanup-watcher | + ''['' 0 == 0 '']''' - ' logger.go:42: 07:00:07 | watcher-topology/2-cleanup-watcher | test step completed 2-cleanup-watcher' - ' logger.go:42: 07:00:07 | watcher-topology | skipping kubernetes event logging' - === CONT kuttl/harness/watcher-tls-certs-change - ' logger.go:42: 07:00:07 | watcher-tls-certs-change | Skipping creation of user-supplied namespace: watcher-kuttl-default' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/0-cleanup-watcher | starting test step 0-cleanup-watcher' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/0-cleanup-watcher | test step completed 0-cleanup-watcher' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | starting test step 1-deploy-with-tlse' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-internal-svc created' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-public-svc created' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Watcher:watcher-kuttl-default/watcher-kuttl created' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' 
public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found' - ' logger.go:42: 07:00:07 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=' - ' logger.go:42: 07:00:14 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)'
- ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:14 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:15 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found' - ' logger.go:42: 07:00:15 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=' - ' logger.go:42: 07:00:16 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 
1' - ' fi' - ' ]' - ' logger.go:42: 07:00:16 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:16 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found' - ' logger.go:42: 07:00:16 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=' - ' logger.go:42: 07:00:17 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:17 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:17 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found' - ' logger.go:42: 07:00:17 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=' - ' logger.go:42: 07:00:18 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat 
/etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:18 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:18 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found' - ' logger.go:42: 07:00:18 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=' - ' logger.go:42: 07:00:19 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != 
"${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:19 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:20 | watcher-tls-certs-change/1-deploy-with-tlse | Error from server (NotFound): pods "watcher-kuttl-api-0" not found' - ' logger.go:42: 07:00:20 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=' - ' logger.go:42: 07:00:21 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:21 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:21 | watcher-tls-certs-change/1-deploy-with-tlse | error: unable to upgrade connection: container not found ("watcher-api")' - ' logger.go:42: 07:00:21 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api 
watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | 
tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' 
logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:22 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:23 | 
watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
+PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 
07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:23 | 
watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | 
AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:23 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:24 | 
watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:24 | 
watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:24 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt' - ' logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 
ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:25 
| watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:25 | 
watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:25 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | 
MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | 
XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:26 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:27 | 
watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:27 | 
watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 
o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 
tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 
af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 
VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 
07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:27 | 
watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 
KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 
MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | 
+GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:27 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:28 
| watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:28 | 
watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:28 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 
+PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 
X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:29 | 
watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:29 | 
watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:29 | 
watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:29 | 
watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:29 | 
watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 
fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:29 
| watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:29 | 
watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:29 | 
watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:29 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc 
secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 
07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | 
MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | 
XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:30 | 
watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:30 | 
watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:30 | 
watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:30 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:31 | 
watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:31 | 
watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 
VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:31 
| watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:31 | 
watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:31 | 
watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:31 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - 
' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 
07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | + 
public_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | 
lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:32 | 
watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:32 | 
watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:32 | 
watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:32 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:33 | 
watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:33 | 
watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 
Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:33 
| watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:33 | 
watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:33 | 
watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:33 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' public_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat 
/etc/pki/tls/certs/public.crt)' - ' public_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${public_svc_cert}" != "${public_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ' - ' internal_svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)' - ' internal_secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${internal_svc_cert}" != "${internal_secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | + public_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | 
tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' 
logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | + public_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:34 | 
watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | 
+PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | 
X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | 
EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:34 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 
GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 
b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ++ base64 --decode' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | + internal_secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 
07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:35 | 
watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | + ''['' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 
fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 
D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | 
AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ==' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | -----END CERTIFICATE-----'' '']''' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/1-deploy-with-tlse | test step completed 1-deploy-with-tlse' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | starting test step 2-change-public-svc-certificate' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c oc patch Certificate -n $NAMESPACE watcher-public-svc --type=''json'' -p=''[{"op": "replace", "path": "/spec/dnsNames", "value":[''watcher-public.watcher-kuttl-default.svc'', ''watcher-public.watcher-kuttl-default.svc.cluster.local'']}]''' - ' ]' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | certificate.cert-manager.io/watcher-public-svc patched' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail' - ' svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)' - ' secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)' - ' # ensure that the svc secret and cert secret match' - ' if [ "${svc_cert}" != "${secret_cert}" ]; then' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | 
MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate 
| DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo=' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----''' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}''' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ base64 --decode' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + secret_cert=''-----BEGIN CERTIFICATE-----' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC' - ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA' - ' 
logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA=='
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'''
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + ''['' ''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MIIDsjCCAhqgAwIBAgIQFVNmGwlW4BZq0NMjqP97cTANBgkqhkiG9w0BAQwFADAY'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAwN1oXDTMxMDEy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMpNXlNH'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | +PgDkPuiPtY2Iy+iOVTneCp31gtcOAI8JIp8fgCvk7KwdimyxIUQ2cYxsZPnG0cy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | wL4XpOQoqp0ElzP3DPP1FUOAnx7ctKdash5DUzBT+84oA2Hab8yd3D9RWVXgYTYP'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | tgSvfD60T1dJeiiJa5kBj0Gfj+0gKtYNueYrOo65hlC6Smd2tP/7av+xLtQ2b1JS'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | tFAWNznSY+2n+8SyIH6jjDb/oBYfANPOIcfU/4X8dTGzLgkpMMXK3KyXfzgD9EW+'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | HexV+da9g5Ix/I5gxku5bd0DWz/ovW+1o/uYMgB5ss9oqd8YKKSqkptXfAtoGToA'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | KG8QccenVwR7kEcCAwEAAaOBjzCBjDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TA2BgNVHREBAf8ELDAqgih3YXRjaGVyLXB1YmxpYy53YXRjaGVy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjMA0GCSqGSIb3DQEBDAUAA4IBgQBmitaOVryjB1Fy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | o+uQ7q1CfFEXu4uk57VffzLi6GFAot2u9gqfKm5Bx/JFKe0HvUHxfyum8oGhg8Mm'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | lF7bosLeepyWZeFYbXLZ3eCDrTyQRPPc8j6O8R9Ygg2wvIhwIrADK1/KTQuPVFW5'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | DVaC7MERErbmczp6DYiELdgcqlJimK3Vp/11qNtk1Kc7wVLI2Y36CzhwVMXOBW00'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | XXQxvYoWLH8WjhVMN9L0ZfTLdQIwL4gt0B3R7Uh8HR/qMCwW4RhIByG/el6wH1yh'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | D5gkY2Jkbf1vvmmw6MBLkx5O8aL145sW8fDKMGrmRj+eO5ALLeYMdmQZXqwZ4URw'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | X1tyjzN1ntrcBJf6xnuMLfT1v9RPMCT4CPeb23M5lqvwQXdrbiarkLQ9n/mBPUA6'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | lUYWBkFqYg9XnAgkWTeehQ7+AeOtvhj6gu3st6nEkOjJs9ri3Jris8kGL9Ay5yMy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | af6ByIVzqJuVxr9GJiFIvhx+t2NEelQUiFnN1465Q67x/AELrfo='
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif'
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA=='
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'' '']'''
- ' logger.go:42: 07:00:35 | watcher-tls-certs-change/2-change-public-svc-certificate | + exit 1'
- ' logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail'
- ' svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)'
- ' secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)'
- ' # ensure that the svc secret and cert secret match'
- ' if [ "${svc_cert}" != "${secret_cert}" ]; then'
- ' exit 1'
- ' fi'
- ' ]'
- ' logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt'
- ' logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | error: unable to upgrade connection: container not found ("watcher-api")'
- ' logger.go:42: 07:00:36 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert='
- ' logger.go:42: 07:00:37 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail'
- ' svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)'
- ' secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)'
- ' # ensure that the svc secret and cert secret match'
- ' if [ "${svc_cert}" != "${secret_cert}" ]; then'
- ' exit 1'
- ' fi'
- ' ]'
- ' logger.go:42: 07:00:37 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt'
- ' logger.go:42: 07:00:38 | watcher-tls-certs-change/2-change-public-svc-certificate | error: unable to upgrade connection: container not found ("watcher-api")'
- ' logger.go:42: 07:00:38 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert='
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | running command: [sh -c set -euxo pipefail'
- ' svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt)'
- ' secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-public-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)'
- ' # ensure that the svc secret and cert secret match'
- ' if [ "${svc_cert}" != "${secret_cert}" ]; then'
- ' exit 1'
- ' fi'
- ' ]'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/public.crt'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | + svc_cert=''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA=='
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'''
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ base64 --decode'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ++ oc get -n watcher-kuttl-default secret cert-watcher-public-svc -o ''jsonpath={.data.tls\.crt}'''
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | + secret_cert=''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA=='
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'''
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | + ''['' ''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA=='
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MIID6jCCAlKgAwIBAgIQaET3jSKb03pJAQEOx1sP8jANBgkqhkiG9w0BAQwFADAY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MRYwFAYDVQQDEw1yb290Y2EtcHVibGljMB4XDTI2MDEyMjA3MDAzNVoXDTMxMDEy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | MTA3MDAzNVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMUO91uC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | WtApJP220eftPqzZX3iJmtM6v4caj75kfMa8Re/FN4CY0xX684UKfFkl5wGSnbEA'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | nB6ivGgBkvlfLhUaru5eP2XnPTnbN54YF5mW2xWMO5nGZ4k07Q/FWqSeLhZSf6yY'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | c7Ifn/QckjDTU4J0B3W5MOlSHZbyV4e5BcBloR2akGH2R74SQTx/v4BQbCRzeGGl'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | adfMoWrCWUBRsSxhG/evB0XoWp0xeiOQoAndabr4auk1XzBqQyLwenOU+CivlIC7'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | +81MamYtHDXZdqly0FQCjSx5l3BzAOQRsAgHbYp//gzb28KOdevORaT2xe/EQuda'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | gqgHYmSQTwJ4EscCAwEAAaOBxzCBxDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | CgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSy3xzAKuvmrfpX'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | EgmGJVm/ieLg8TBuBgNVHREBAf8EZDBigih3YXRjaGVyLXB1YmxpYy53YXRjaGVy'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWt1dHRsLWRlZmF1bHQuc3ZjgjZ3YXRjaGVyLXB1YmxpYy53YXRjaGVyLWt1dHRs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | LWRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwwDQYJKoZIhvcNAQEMBQADggGBACpS'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | JbVH3n79slClC6DdMUQHLtnvKYoL33kgMsiwf3P6qByavtbQn8lPJyD1m/xsrRGs'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | 3uBsjnZwn/rMyTOFYv9sxO/5PkOiTHmsCW0Q9jkMzI/WycjZCnAlMX9+LRj7Dl8e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | d6eyo3wukodG1taKDTVGqlmuv2mGd6Xf506cK2/mf/w556quUaYi9N2e4YY1BNly'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | ynm3VjHEyuigghb7wbMzwo+FhmxyYBVYlekM7bjhGbxmBycVFc5WvkSdV8MqGksj'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | tjF7CNFK+/KxETpnxRQCnAXEsTjjHnc2yc35qR3GdT3353LSejPl3HONjc+IfQCC'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | grgIUD2qGizLxPAfdSCnaZZKjryCJZ0+CuAI1K8RsxvLn3DT7H8GQ74ZnA8X9l7e'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | u37MJKf/IEW0RK82TTgxAMZpZKgCnz47NTqXlYPZUnvX85DxWuCfakyvObVyFIif'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | OsRjhko6IWIMauEaPp2blS9MFbTOAMCM2NzpXGE9gyJxekasxuv5Ifq3TBk/oA=='
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | -----END CERTIFICATE-----'' '']'''
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/2-change-public-svc-certificate | test step completed 2-change-public-svc-certificate'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | starting test step 3-change-internal-svc-certificate'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | running command: [sh -c oc patch Certificate -n $NAMESPACE watcher-internal-svc --type=''json'' -p=''[{"op": "replace", "path": "/spec/dnsNames", "value":[''watcher-internal.watcher-kuttl-default.svc'', ''watcher-internal.watcher-kuttl-default.svc.cluster.local'']}]'''
- ' ]'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | certificate.cert-manager.io/watcher-internal-svc patched'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | running command: [sh -c set -euxo pipefail'
- ' svc_cert=$(oc rsh -n $NAMESPACE -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt)'
- ' secret_cert=$(oc get -n $NAMESPACE secret cert-watcher-internal-svc -o jsonpath=''{.data.tls\.crt}'' | base64 --decode)'
- ' # ensure that the svc secret and cert secret match'
- ' if [ "${svc_cert}" != "${secret_cert}" ]; then'
- ' exit 1'
- ' fi'
- ' ]'
- ' logger.go:42: 07:00:39 | watcher-tls-certs-change/3-change-internal-svc-certificate | ++ oc rsh -n watcher-kuttl-default -c watcher-api watcher-kuttl-api-0 cat /etc/pki/tls/certs/internal.crt'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | + svc_cert=''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ=='
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----'''
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ++ oc get -n watcher-kuttl-default secret cert-watcher-internal-svc -o ''jsonpath={.data.tls\.crt}'''
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ++ base64 --decode'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | + secret_cert=''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ=='
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----'''
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | + ''['' ''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ=='
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----'' ''!='' ''-----BEGIN CERTIFICATE-----'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MIIDtzCCAh+gAwIBAgIRAMdtA66/UVErgQ6jB2Mo7g8wDQYJKoZIhvcNAQEMBQAw'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | GjEYMBYGA1UEAxMPcm9vdGNhLWludGVybmFsMB4XDTI2MDEyMjA3MDAwN1oXDTMx'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | MDEyMTA3MDAwN1owADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOzY'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | hUVeRLlVB60sJbdpaQAJI2vpbeEEDa91MsWoYlWF58nvtb17k8KGvpPNemDF4UvD'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | kRm+oSlbUhWwlbwI1Q3VFz4JhvU12PvdUhV4e5PbNGULUDGpAbr04w098HEQSUj+'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | D/44rQFA0PbvLa//wISJcF6NG2kZiiAFAlrfYJH5tJM2nbdDOvHRY2FScNwLw3aE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 1UGIp9tmMA1on0l+9tBcndNzSq8klq5TdwDo8Pwrqz8m37ZXvwD1gizzDXwZhiPN'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | KIvLKp0hAxvswvj5Rib80zGQX6JjJEcOBFZ4GT/e8WUjrsJ6JsTygAtF9Y8xwchO'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | Ce4xdKeeGZ8hDK2ojFcCAwEAAaOBkTCBjjAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0l'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | BAwwCgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSEvgxWYqXo'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | VaasRUQLJzZAsO/HJTA4BgNVHREBAf8ELjAsgip3YXRjaGVyLWludGVybmFsLndh'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | dGNoZXIta3V0dGwtZGVmYXVsdC5zdmMwDQYJKoZIhvcNAQEMBQADggGBAJzAl8kI'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | fRCmnFBnPIq9+3XfefG0pybCCtUusBuoATJYDHmH/379Rb4zeN6bqXg7PP36qVao'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | 98lAd5LFEieOBPJq2+QXo8CWrLcbItzbOHdFeVStfpuQqaD2OMYLLepP/IpAKT+0'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | +GEHqjJsaHF3xj6g5HcvQzhSaQh3Pg8ffKmqJKQg95PF1BPQ110MawKYh9MBibPr'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | b0myrALIel2CeIeWqm+VMrMI8en/gFxJWTfg58GcQbRiUDzWa2BHE1mEoXmOLNDt'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | L1iYWnFugdfAycs2+uAkPMbpav3Oi3WJX1N1iKRklNv5Cq8oQ++b8aoR2RZPnJJE'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | ppmkEBU4Kt94NO5B/A2PnjkM8+yOQjhjVnyD5Ccp8kHx0MN7H1OW9J0m4JEPjP/c'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | uRTxWb4yoJl6Igdfy3tYgldteKztKgf4tDlkR/Htsf8GVL+zbsnTseYY5BakyygP'
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | AHnxS0D1QCA5Zv5craVd+a2ZCNvbnSdl4pwShL796A0cNZGnAKDD1LwcYQ=='
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | -----END CERTIFICATE-----'' '']'''
- ' logger.go:42: 07:00:40 | watcher-tls-certs-change/3-change-internal-svc-certificate | test
step completed 3-change-internal-svc-certificate' - ' logger.go:42: 07:00:40 | watcher-tls-certs-change/4-cleanup-watcher | starting test step 4-cleanup-watcher' - ' logger.go:42: 07:00:50 | watcher-tls-certs-change/4-cleanup-watcher | test step completed 4-cleanup-watcher' - ' logger.go:42: 07:00:50 | watcher-tls-certs-change/5-clenaup-certs | starting test step 5-clenaup-certs' - ' logger.go:42: 07:00:50 | watcher-tls-certs-change/5-clenaup-certs | test step completed 5-clenaup-certs' - ' logger.go:42: 07:00:50 | watcher-tls-certs-change | skipping kubernetes event logging' - === CONT kuttl/harness/watcher-tls - ' logger.go:42: 07:00:50 | watcher-tls | Skipping creation of user-supplied namespace: watcher-kuttl-default' - ' logger.go:42: 07:00:50 | watcher-tls/0-cleanup-watcher | starting test step 0-cleanup-watcher' - ' logger.go:42: 07:00:50 | watcher-tls/0-cleanup-watcher | test step completed 0-cleanup-watcher' - ' logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | starting test step 1-deploy-with-tlse' - ' logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-internal-svc created' - ' logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | Certificate:watcher-kuttl-default/watcher-public-svc created' - ' logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | Watcher:watcher-kuttl-default/watcher-kuttl created' - ' logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get 
-n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # check that both endpoints have https set' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:00:50 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''^watcher''' - ' logger.go:42: 07:00:53 | watcher-tls/1-deploy-with-tlse | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:00:55 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o 
jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # check that both endpoints have https set' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:00:55 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:00:55 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''^watcher''' - ' logger.go:42: 07:00:57 | watcher-tls/1-deploy-with-tlse | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:00:58 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- 
openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # check that both endpoints have https set' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:00:58 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:00:58 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''^watcher''' - ' logger.go:42: 07:01:00 | watcher-tls/1-deploy-with-tlse | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:01:01 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} 
openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # check that both endpoints have https set' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:01:01 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:01:01 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''^watcher''' - ' logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | + ''['' 1 
== 1 '']''' - ' logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | ++ awk ''{print $1}''' - ' logger.go:42: 07:01:05 | watcher-tls/1-deploy-with-tlse | ++ grep watcher' - ' logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:01:07 | watcher-tls/1-deploy-with-tlse | + ''['' -n '''' '']''' - ' logger.go:42: 07:01:09 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # check that both endpoints have https set' - ' oc exec -n 
${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:01:09 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:01:09 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''^watcher''' - ' logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | ++ awk ''{print $1}''' - ' logger.go:42: 07:01:11 | watcher-tls/1-deploy-with-tlse | ++ grep watcher' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n 
watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.my\.cnf}''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ grep -c ssl=1' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ base64 -d' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.00-default\.conf}''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ base64 -d' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | + grep infra-optim' - ' logger.go:42: 07:01:14 | watcher-tls/1-deploy-with-tlse | ++ grep -c https' - ' logger.go:42: 07:01:17 | watcher-tls/1-deploy-with-tlse | + ''['' 0 == 2 '']''' - ' logger.go:42: 07:01:18 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o 
jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # check that both endpoints have https set' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:01:18 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:01:18 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''^watcher''' - ' logger.go:42: 07:01:20 | watcher-tls/1-deploy-with-tlse | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:20 | watcher-tls/1-deploy-with-tlse | ++ grep watcher' - ' logger.go:42: 07:01:20 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:01:20 | 
watcher-tls/1-deploy-with-tlse | ++ awk ''{print $1}''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.my\.cnf}''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ grep -c ssl=1' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ base64 -d' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.00-default\.conf}''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ base64 -d' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + grep infra-optim' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:01:24 | watcher-tls/1-deploy-with-tlse | ++ grep 
-c https' - ' logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:01:27 | watcher-tls/1-deploy-with-tlse | + exit 0' - ' logger.go:42: 07:01:28 | watcher-tls/1-deploy-with-tlse | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n ${NAMESPACE} secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # check that both endpoints have https set' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' 
- ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:01:28 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:01:28 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''^watcher''' - ' logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | ++ awk ''{print $1}''' - ' logger.go:42: 07:01:31 | watcher-tls/1-deploy-with-tlse | ++ grep watcher' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ base64 -d' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ grep -c ssl=1' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.my\.cnf}''' - ' logger.go:42: 
07:01:34 | watcher-tls/1-deploy-with-tlse | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.00-default\.conf}''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ base64 -d' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | + grep infra-optim' - ' logger.go:42: 07:01:34 | watcher-tls/1-deploy-with-tlse | ++ grep -c https' - ' logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | + exit 0' - ' logger.go:42: 07:01:37 | watcher-tls/1-deploy-with-tlse | test step completed 1-deploy-with-tlse' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | starting test step 2-patch-mtls' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail' - ' oc patch oscp -n $NAMESPACE openstack --type=''json'' -p=''[{"op": "replace", "path": "/spec/memcached/templates/memcached/tls/mtls/sslVerifyMode", "value": "Request"}]''' - ' ]' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | + oc patch oscp -n watcher-kuttl-default openstack --type=json ''-p=[{"op": "replace", "path": "/spec/memcached/templates/memcached/tls/mtls/sslVerifyMode", "value": "Request"}]''' - ' logger.go:42: 07:01:37 | 
watcher-tls/2-patch-mtls | openstackcontrolplane.core.openstack.org/openstack patched' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail' - ' ' - ' oc project ${NAMESPACE}' - ' # Get pod names for each watcher service' - ' APIPOD=$(oc get pods -l service=watcher-api -o jsonpath=''{.items[0].metadata.name}'')' - ' APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath=''{.items[0].metadata.name}'')' - ' DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath=''{.items[0].metadata.name}'')' - ' ' - ' # Verify memcached mTLS config parameters in watcher-api config' - ' if [ -n "${APIPOD}" ]; then' - ' echo "Checking watcher-api config..."' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]' - ' ' - ' # Verify mTLS config parameters in memcached backend config' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ' - ' # Verify memcached mTLS config parameters in 
watcher-applier config' - ' if [ -n "${APPLIERPOD}" ]; then' - ' echo "Checking watcher-applier config..."' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]' - ' ' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ' - ' # Verify memcached mTLS config parameters in watcher-decision-engine config' - ' if [ -n "${DECISIONENGINEPOD}" ]; then' - ' echo "Checking watcher-decision-engine config..."' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat 
/etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]' - ' ' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:01:37 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-api config...''' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | Checking watcher-api config...' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true 
''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:38 | watcher-tls/2-patch-mtls | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:01:39 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail ...]' - ' logger.go:42: 07:01:39 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-api config...''' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | Checking watcher-api config...'
- ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found ("watcher-api")' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | ++ echo' - ' logger.go:42: 07:01:40 | watcher-tls/2-patch-mtls | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail ...]' - ' logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default' - ' logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:01:41 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-api config...''' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | Checking watcher-api config...' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found ("watcher-api")' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | ++ echo' - ' logger.go:42: 07:01:42 | watcher-tls/2-patch-mtls | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:01:43 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail ...]' - ' logger.go:42: 07:01:43 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-api config...''' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | Checking watcher-api config...' 
- ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found ("watcher-api")' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | ++ echo' - ' logger.go:42: 07:01:44 | watcher-tls/2-patch-mtls | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:01:45 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail ...]' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-api config...''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | Checking watcher-api config...' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true 
''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 
''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_keyfile = /etc/pki/tls/private/mtls.key''' - ' logger.go:42: 07:01:46 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = 
password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt''' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = 
''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = 
internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_enabled = true''' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal 
auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt' - ' logger.go:42: 07:01:47 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url 
= ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 
''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt 
memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt' - ' logger.go:42: 07:01:48 | 
watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 
memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-applier-0 '']''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-applier config...''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | Checking watcher-applier config...' 
- ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' 
backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_keyfile = /etc/pki/tls/private/mtls.key''' - ' logger.go:42: 07:01:48 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' 
memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:49 | 
watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt''' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool 
memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_enabled = true''' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = 
/etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt' - ' logger.go:42: 07:01:49 | 
watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 
memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key' - ' logger.go:42: 07:01:49 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = 
/etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat 
/etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true 
tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-decision-engine-0 '']''' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-decision-engine config...''' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | Checking watcher-decision-engine config...' 
- ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | error: unable to upgrade connection: container not found ("watcher-decision-engine")' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | ++ echo' - ' logger.go:42: 07:01:50 | watcher-tls/2-patch-mtls | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:01:51 | watcher-tls/2-patch-mtls | running command: [sh -c set -euxo pipefail' - ' ' - ' oc project ${NAMESPACE}' - ' # Get pod names for each watcher service' - ' APIPOD=$(oc get pods -l service=watcher-api -o jsonpath=''{.items[0].metadata.name}'')' - ' APPLIERPOD=$(oc get pods -l service=watcher-applier -o jsonpath=''{.items[0].metadata.name}'')' - ' DECISIONENGINEPOD=$(oc get pods -l service=watcher-decision-engine -o jsonpath=''{.items[0].metadata.name}'')' - ' ' - ' # Verify memcached mTLS config parameters in watcher-api config' - ' if [ -n "${APIPOD}" ]; then' - ' echo "Checking watcher-api config..."' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]' - ' ' - ' # Verify mTLS config parameters in memcached backend config' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat 
/etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api $APIPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ' - ' # Verify memcached mTLS config parameters in watcher-applier config' - ' if [ -n "${APPLIERPOD}" ]; then' - ' echo "Checking watcher-applier config..."' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]' - ' ' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-applier $APPLIERPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ' - ' # Verify memcached mTLS config parameters in watcher-decision-engine config' - ' if [ -n "${DECISIONENGINEPOD}" ]; then' - ' echo "Checking watcher-decision-engine 
config..."' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_keyfile = /etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "memcache_tls_enabled = true") == 1 ]' - ' ' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_certfile=/etc/pki/tls/certs/mtls.crt") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_keyfile=/etc/pki/tls/private/mtls.key") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-decision-engine $DECISIONENGINEPOD cat /etc/watcher/watcher.conf.d/00-default.conf) | grep -c "tls_cafile=/etc/pki/tls/certs/mtls-ca.crt") == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:01:51 | watcher-tls/2-patch-mtls | + oc project watcher-kuttl-default' - ' logger.go:42: 07:01:51 | watcher-tls/2-patch-mtls | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:01:51 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-api -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-applier -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + APPLIERPOD=watcher-kuttl-applier-0' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ oc get pods -l service=watcher-decision-engine -o ''jsonpath={.items[0].metadata.name}''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + DECISIONENGINEPOD=watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-api config...''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | Checking watcher-api config...' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true 
''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 
''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_keyfile = /etc/pki/tls/private/mtls.key''' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:52 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = 
password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt''' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = 
''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = 
internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_enabled = true''' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal 
auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url 
= ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 
''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt 
memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat 
/etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:53 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool 
memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-applier-0 '']''' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-applier config...''' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | Checking watcher-applier config...' 
- ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' 
backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_keyfile = /etc/pki/tls/private/mtls.key''' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' 
memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:54 | 
watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt''' - ' logger.go:42: 07:01:54 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool 
memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_enabled = true''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = 
/etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt' - ' logger.go:42: 07:01:55 | 
watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 
memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = 
/etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-applier watcher-kuttl-applier-0 cat 
/etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true 
tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + ''['' -n watcher-kuttl-decision-engine-0 '']''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | + echo ''Checking watcher-decision-engine config...''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | Checking watcher-decision-engine config...' 
- ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt ''' - ' logger.go:42: 07:01:55 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus 
''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_keyfile = /etc/pki/tls/private/mtls.key''' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop 
''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal 
''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt''' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher 
auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c ''memcache_tls_enabled = true''' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = 
''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = 
internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | ++ grep -c tls_certfile=/etc/pki/tls/certs/mtls.crt' - ' logger.go:42: 07:01:56 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = 
password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ grep -c tls_keyfile=/etc/pki/tls/private/mtls.key' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher 
transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = 
metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ grep -c tls_cafile=/etc/pki/tls/certs/mtls-ca.crt' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | +++ oc rsh -c watcher-decision-engine watcher-kuttl-decision-engine-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_95d6:0107df271c530ebe460a742c5441b891@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = 
Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/ca.crt ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:01:57 | watcher-tls/2-patch-mtls | test step completed 2-patch-mtls' - ' logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | starting test step 
3-disable-podlevel-tls' - ' logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type=''json'' -p=''[{"op": "replace", "path": "/spec/apiServiceTemplate/tls/api", "value":{ "internal": {}, "public": {} }}]''' - ' ]' - ' logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | watcher.watcher.openstack.org/watcher-kuttl patched' - ' logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' # check that watcher internal endpoint does not use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]' - ' # check that watcher public endpoint does use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables 
found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:01:57 | watcher-tls/3-disable-podlevel-tls | ++ grep -c ''^watcher''' - ' logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher' - ' logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:02:01 | watcher-tls/3-disable-podlevel-tls | ++ awk ''{print $1}''' - ' logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:02:03 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + grep internal' - ' logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim' - ' logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:02:04 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https' - ' 
logger.go:42: 07:02:06 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 0 '']''' - ' logger.go:42: 07:02:07 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' # check that watcher internal endpoint does not use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]' - ' # check that watcher public endpoint does use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:02:07 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:02:07 | 
watcher-tls/3-disable-podlevel-tls | ++ grep -c ''^watcher''' - ' logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | ++ awk ''{print $1}''' - ' logger.go:42: 07:02:09 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + grep internal' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim' - ' logger.go:42: 07:02:12 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:02:14 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 0 '']''' - ' logger.go:42: 07:02:15 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c 
^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' # check that watcher internal endpoint does not use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]' - ' # check that watcher public endpoint does use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:02:15 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:02:15 | watcher-tls/3-disable-podlevel-tls | ++ grep -c ''^watcher''' - ' logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' 
- ' logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | ++ awk ''{print $1}''' - ' logger.go:42: 07:02:18 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + grep internal' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim' - ' logger.go:42: 07:02:20 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https' - ' logger.go:42: 07:02:23 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 0 '']''' - ' logger.go:42: 07:02:24 | watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o 
jsonpath={.status.hash.dbsync})" ]' - ' # check that watcher internal endpoint does not use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]' - ' # check that watcher public endpoint does use https' - ' oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:02:24 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:02:24 | watcher-tls/3-disable-podlevel-tls | ++ grep -c ''^watcher''' - ' logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | ++ awk ''{print $1}''' - ' logger.go:42: 07:02:30 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912' - ' logger.go:42: 07:02:33 | 
watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + ''['' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 '']''' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | + grep internal' - ' logger.go:42: 07:02:33 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https' - ' logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + ''['' 0 == 0 '']''' - ' logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list' - ' logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + grep public' - ' logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim' - ' logger.go:42: 07:02:36 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https' - ' logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:02:38 | watcher-tls/3-disable-podlevel-tls | + exit 0' - ' logger.go:42: 07:02:39 | 
watcher-tls/3-disable-podlevel-tls | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# check that watcher internal endpoint does not use https
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep internal | [ $(grep -c https) == 0 ]
# check that watcher public endpoint does use https
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | grep public | [ $(grep -c https) == 1 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:02:39 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:02:39 | watcher-tls/3-disable-podlevel-tls | ++ grep -c '^watcher'
logger.go:42: 07:02:41 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'
logger.go:42: 07:02:41 | watcher-tls/3-disable-podlevel-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:02:41 | watcher-tls/3-disable-podlevel-tls | ++ awk '{print $1}'
logger.go:42: 07:02:42 | watcher-tls/3-disable-podlevel-tls | ++ grep watcher
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + grep internal
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim
logger.go:42: 07:02:44 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https
logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | + '[' 0 == 0 ']'
logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack endpoint list
logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | + grep public
logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | + grep infra-optim
logger.go:42: 07:02:46 | watcher-tls/3-disable-podlevel-tls | ++ grep -c https
logger.go:42: 07:02:48 | watcher-tls/3-disable-podlevel-tls | + '[' 1 == 1 ']'
logger.go:42: 07:02:48 | watcher-tls/3-disable-podlevel-tls | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:02:49 | watcher-tls/3-disable-podlevel-tls | + '[' '' == '' ']'
logger.go:42: 07:02:49 | watcher-tls/3-disable-podlevel-tls | + exit 0
logger.go:42: 07:02:49 | watcher-tls/3-disable-podlevel-tls | test step completed 3-disable-podlevel-tls
logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | starting test step 4-deploy-without-route
logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{"op": "replace", "path": "/spec/apiServiceTemplate/override", "value":{"service": { "internal": {}, "public": { "metadata": { "annotations": { "metallb.universe.tf/address-pool": "ctlplane", "metallb.universe.tf/allow-shared-ip": "ctlplane" } }, "spec": { "type": "LoadBalancer" } } } }}]'
]
logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | watcher.watcher.openstack.org/watcher-kuttl patched
logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:02:49 | watcher-tls/4-deploy-without-route | ++ grep -c '^watcher'
logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | + '[' 1 == 1 ']'
logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | ++ awk '{print $1}'
logger.go:42: 07:02:51 | watcher-tls/4-deploy-without-route | ++ grep watcher
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + '[' '' == '' ']'
logger.go:42: 07:02:54 | watcher-tls/4-deploy-without-route | + exit 0
logger.go:42: 07:02:55 | watcher-tls/4-deploy-without-route | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:02:55 | watcher-tls/4-deploy-without-route | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:02:55 | watcher-tls/4-deploy-without-route | ++ grep -c '^watcher'
logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | + '[' 1 == 1 ']'
logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | ++ awk '{print $1}'
logger.go:42: 07:02:57 | watcher-tls/4-deploy-without-route | ++ grep watcher
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + '[' '' == '' ']'
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | + exit 0
logger.go:42: 07:03:00 | watcher-tls/4-deploy-without-route | test step completed 4-deploy-without-route
logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | starting test step 5-disable-tls
logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type='json' -p='[{"op": "replace", "path": "/spec/apiServiceTemplate/override", "value":{}}]'
]
logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | watcher.watcher.openstack.org/watcher-kuttl patched
logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
# check that no watcher endpoint uses https
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 0 ]
]
logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:00 | watcher-tls/5-disable-tls | ++ grep -c '^watcher'
logger.go:42: 07:03:03 | watcher-tls/5-disable-tls | + '[' 1 == 1 ']'
logger.go:42: 07:03:03 | watcher-tls/5-disable-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:03:03 | watcher-tls/5-disable-tls | ++ awk '{print $1}'
logger.go:42: 07:03:03 | watcher-tls/5-disable-tls | ++ grep watcher
logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:03:05 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | + '[' '' == '' ']'
logger.go:42: 07:03:06 | watcher-tls/5-disable-tls | + exit 0
logger.go:42: 07:03:07 | watcher-tls/5-disable-tls | running command: [sh -c set -euxo pipefail
oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n ${NAMESPACE} openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n ${NAMESPACE} keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n ${NAMESPACE} watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
# check that no watcher endpoint uses https
oc exec -n ${NAMESPACE} openstackclient -- openstack endpoint list | grep infra-optim | [ $(grep -c https) == 0 ]
]
logger.go:42: 07:03:07 | watcher-tls/5-disable-tls | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:07 | watcher-tls/5-disable-tls | ++ grep -c '^watcher'
logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | + '[' 1 == 1 ']'
logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | ++ grep watcher
logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:03:09 | watcher-tls/5-disable-tls | ++ awk '{print $1}'
logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | + SERVICEID=95cae6c02c914be89de8a64359351912
logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | + '[' 95cae6c02c914be89de8a64359351912 == 95cae6c02c914be89de8a64359351912 ']'
logger.go:42: 07:03:11 | watcher-tls/5-disable-tls | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | + '[' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q ']'
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | + '[' '' == '' ']'
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | + exit 0
logger.go:42: 07:03:12 | watcher-tls/5-disable-tls | test step completed 5-disable-tls
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | starting test step 6-cleanup-watcher
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:12 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:15 | watcher-tls/6-cleanup-watcher | + '[' 1 == 0 ']'
logger.go:42: 07:03:16 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:16 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:16 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:19 | watcher-tls/6-cleanup-watcher | + '[' 0 == 0 ']'
logger.go:42: 07:03:20 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:20 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:20 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:23 | watcher-tls/6-cleanup-watcher | + '[' 0 == 0 ']'
logger.go:42: 07:03:24 | watcher-tls/6-cleanup-watcher | running command: [sh -c set -ex
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]
]
logger.go:42: 07:03:24 | watcher-tls/6-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:03:24 | watcher-tls/6-cleanup-watcher | ++ grep -c '^watcher'
logger.go:42: 07:03:26 | watcher-tls/6-cleanup-watcher | + '[' 0 == 0 ']'
logger.go:42: 07:03:26 | watcher-tls/6-cleanup-watcher | test step completed 6-cleanup-watcher
logger.go:42: 07:03:26 | watcher-tls/7-cleanup-certs | starting test step 7-cleanup-certs
logger.go:42: 07:03:26 | watcher-tls/7-cleanup-certs | test step completed 7-cleanup-certs
logger.go:42: 07:03:26 | watcher-tls | skipping kubernetes event logging
=== CONT kuttl/harness/watcher-rmquser
logger.go:42: 07:03:26 | watcher-rmquser | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:03:26 | watcher-rmquser/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:03:26 | watcher-rmquser/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | starting test step 1-deploy
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | running command: [sh -c set -euxo pipefail

# Wait for Watcher to be Ready
kubectl wait --for=condition=Ready watcher/watcher-kuttl -n $NAMESPACE --timeout=300s

# Verify WatcherNotificationTransportURLReady condition exists and is True
kubectl get watcher watcher-kuttl -n $NAMESPACE -o jsonpath='{.status.conditions[?(@.type=="WatcherNotificationTransportURLReady")].status}' | grep -q "True"
echo "WatcherNotificationTransportURLReady condition is True"

# Count TransportURL CRs - should be exactly 2 (one for messaging, one for notifications)
transport_count=$(kubectl get transporturl -n $NAMESPACE -o name | grep "watcher-kuttl-watcher-transport" | wc -l)
notification_transport_count=$(kubectl get transporturl -n $NAMESPACE -o name | grep "watcher-kuttl-watcher-notification" | wc -l)

if [ "$transport_count" -ne "1" ]; then
echo "Expected 1 watcher-transport TransportURL, found $transport_count"
exit 1
fi

if [ "$notification_transport_count" -ne "1" ]; then
echo "Expected 1 notification-transport TransportURL, found $notification_transport_count"
exit 1
fi

echo "Correctly found 2 TransportURLs (separate clusters: transport and notification)"

# Verify watcher-transport has correct user and vhost
transport_user=$(kubectl get transporturl watcher-kuttl-watcher-transport -n $NAMESPACE -o jsonpath='{.spec.username}')
transport_vhost=$(kubectl get transporturl watcher-kuttl-watcher-transport -n $NAMESPACE -o jsonpath='{.spec.vhost}')
if [ "$transport_user" != "watcher-rpc" ]; then
echo "Expected watcher-transport username 'watcher-rpc', found '$transport_user'"
exit 1
fi
if [ "$transport_vhost" != "watcher-rpc" ]; then
echo "Expected watcher-transport vhost 'watcher-rpc', found '$transport_vhost'"
exit 1
fi
echo "Watcher transport has correct user (watcher-rpc) and vhost (watcher-rpc)"

# Verify notification-transport has correct user and vhost
notif_user=$(kubectl get transporturl watcher-kuttl-watcher-notification-rabbitmq-notifications -n $NAMESPACE -o jsonpath='{.spec.username}')
notif_vhost=$(kubectl get transporturl watcher-kuttl-watcher-notification-rabbitmq-notifications -n $NAMESPACE -o jsonpath='{.spec.vhost}')
if [ "$notif_user" != "watcher-notifications" ]; then
echo "Expected notification-transport username 'watcher-notifications', found '$notif_user'"
exit 1
fi
if [ "$notif_vhost" != "watcher-notifications" ]; then
echo "Expected notification-transport vhost 'watcher-notifications', found '$notif_vhost'"
exit 1
fi
echo "Notification transport has correct user (watcher-notifications) and vhost (watcher-notifications)"

# Verify that watcher.conf contains the notifications transport_url
WATCHER_API_POD=$(kubectl get pods -n $NAMESPACE -l "service=watcher-api" -o custom-columns=:metadata.name --no-headers | grep -v ^$ | head -1)
if [ -z "${WATCHER_API_POD}" ]; then
echo "No watcher-api pod found"
exit 1
fi
# Verify RPC transport_url in DEFAULT section
rpc_transport_url=$(kubectl exec -n $NAMESPACE ${WATCHER_API_POD} -c watcher-api -- cat /etc/watcher/watcher.conf.d/00-default.conf | grep -E '^\[DEFAULT\]' -A 50 | grep 'transport_url' | head -1 || true)
if [ -z "$rpc_transport_url" ]; then
echo "transport_url not found in DEFAULT section"
exit 1
fi
echo "Found RPC transport_url: $rpc_transport_url"

# Verify the RPC transport_url contains the correct vhost (watcher-rpc)
if ! echo "$rpc_transport_url" | grep -q '/watcher-rpc'; then
echo "RPC transport_url does not contain expected vhost '/watcher-rpc'"
exit 1
fi
echo "Successfully verified vhost 'watcher-rpc' in RPC transport_url"

# Verify the RPC transport_url contains the correct username (watcher-rpc)
if ! echo "$rpc_transport_url" | grep -q 'watcher-rpc:'; then
echo "RPC transport_url does not contain expected username 'watcher-rpc:'"
exit 1
fi
echo "Successfully verified username 'watcher-rpc' in RPC transport_url"

# Verify oslo_messaging_notifications section has transport_url configured
notif_transport_url=$(kubectl exec -n $NAMESPACE ${WATCHER_API_POD} -c watcher-api -- cat /etc/watcher/watcher.conf.d/00-default.conf | grep -A 5 '\[oslo_messaging_notifications\]' | grep 'transport_url' || true)
if [ -z "$notif_transport_url" ]; then
echo "transport_url not found in oslo_messaging_notifications section"
exit 1
fi
echo "Found notifications transport_url: $notif_transport_url"

# Verify the notifications transport_url contains the correct vhost (watcher-notifications)
if ! echo "$notif_transport_url" | grep -q '/watcher-notifications'; then
echo "Notifications transport_url does not contain expected vhost '/watcher-notifications'"
exit 1
fi
echo "Successfully verified vhost 'watcher-notifications' in notifications transport_url"

# Verify the notifications transport_url contains the correct username (watcher-notifications)
if ! echo "$notif_transport_url" | grep -q 'watcher-notifications:'; then
echo "Notifications transport_url does not contain expected username 'watcher-notifications:'"
exit 1
fi
echo "Successfully verified username 'watcher-notifications' in notifications transport_url"

exit 0
]
logger.go:42: 07:03:26 | watcher-rmquser/1-deploy | + kubectl wait --for=condition=Ready watcher/watcher-kuttl -n watcher-kuttl-default --timeout=300s
logger.go:42: 07:08:26 | watcher-rmquser/1-deploy | error: timed out waiting for the condition on watchers/watcher-kuttl
logger.go:42: 07:08:26 | watcher-rmquser/1-deploy | test step failed 1-deploy
case.go:396: failed in step 1-deploy
case.go:398: transporturls.rabbitmq.openstack.org "watcher-kuttl-watcher-transport" not found
case.go:398: transporturls.rabbitmq.openstack.org "watcher-kuttl-watcher-notification-rabbitmq-notifications" not found
case.go:398: command "kubectl wait --for=condition=Ready watcher/watcher-kuttl -n $NAMESP..." exceeded 300 sec timeout, context deadline exceeded
logger.go:42: 07:08:26 | watcher-rmquser | skipping kubernetes event logging
=== CONT kuttl/harness/watcher-api-scaling
logger.go:42: 07:08:26 | watcher-api-scaling | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:08:26 | watcher-api-scaling/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:08:27 | watcher-api-scaling/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | starting test step 1-deploy-with-defaults
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:27 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:29 | watcher-api-scaling/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:08:30 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:30 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:30 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:34 | watcher-api-scaling/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:08:35 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:35 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:35 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:37 | watcher-api-scaling/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:08:38 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:08:38 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:08:38 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | + '[' 1 == 1 ']'
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | ++ awk '{print $1}'
logger.go:42: 07:08:41 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | + '[' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb ']'
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 'jsonpath={.status.hash.dbsync}'
logger.go:42: 07:08:43 | watcher-api-scaling/1-deploy-with-defaults | + '[' -n '' ']'
logger.go:42: 07:08:44 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:08:44 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:08:44 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c ''^watcher''' - ' logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | ++ awk ''{print $1}''' - ' logger.go:42: 07:08:48 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher' - ' logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb' - ' logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | + ''['' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb '']''' - ' logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:08:50 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:08:51 | watcher-api-scaling/1-deploy-with-defaults | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:08:51 | watcher-api-scaling/1-deploy-with-defaults | + 
exit 0' - ' logger.go:42: 07:08:52 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:08:52 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:08:52 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c ''^watcher''' - ' logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | ++ awk 
''{print $1}''' - ' logger.go:42: 07:08:54 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher' - ' logger.go:42: 07:08:56 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb' - ' logger.go:42: 07:08:56 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + ''['' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb '']''' - ' logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:08:57 | watcher-api-scaling/1-deploy-with-defaults | + exit 0' - ' logger.go:42: 07:08:58 | watcher-api-scaling/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc 
get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:08:58 | watcher-api-scaling/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:08:58 | watcher-api-scaling/1-deploy-with-defaults | ++ grep -c ''^watcher''' - ' logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | ++ grep watcher' - ' logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | ++ awk ''{print $1}''' - ' logger.go:42: 07:09:00 | watcher-api-scaling/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:09:02 | watcher-api-scaling/1-deploy-with-defaults | + SERVICEID=7bb23bb8113c4d3cac4445d032b0decb' - ' logger.go:42: 07:09:02 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + ''['' 7bb23bb8113c4d3cac4445d032b0decb == 7bb23bb8113c4d3cac4445d032b0decb '']''' - ' logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' 
logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | + exit 0' - ' logger.go:42: 07:09:03 | watcher-api-scaling/1-deploy-with-defaults | test step completed 1-deploy-with-defaults' - ' logger.go:42: 07:09:03 | watcher-api-scaling/2-scale-up-watcher-api | starting test step 2-scale-up-watcher-api' - ' logger.go:42: 07:09:03 | watcher-api-scaling/2-scale-up-watcher-api | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type=''json'' -p=''[{"op": "replace", "path": "/spec/apiServiceTemplate/replicas", "value":3}]''' - ' ]' - ' logger.go:42: 07:09:03 | watcher-api-scaling/2-scale-up-watcher-api | watcher.watcher.openstack.org/watcher-kuttl patched' - ' logger.go:42: 07:09:14 | watcher-api-scaling/2-scale-up-watcher-api | test step completed 2-scale-up-watcher-api' - ' logger.go:42: 07:09:14 | watcher-api-scaling/3-scale-down-watcher-api | starting test step 3-scale-down-watcher-api' - ' logger.go:42: 07:09:14 | watcher-api-scaling/3-scale-down-watcher-api | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type=''json'' -p=''[{"op": "replace", "path": "/spec/apiServiceTemplate/replicas", "value":1}]''' - ' ]' - ' logger.go:42: 07:09:15 | watcher-api-scaling/3-scale-down-watcher-api | watcher.watcher.openstack.org/watcher-kuttl patched' - ' logger.go:42: 07:09:19 | watcher-api-scaling/3-scale-down-watcher-api | test step completed 3-scale-down-watcher-api' - ' logger.go:42: 07:09:19 | watcher-api-scaling/4-scale-down-zero-watcher-api | starting 
test step 4-scale-down-zero-watcher-api' - ' logger.go:42: 07:09:19 | watcher-api-scaling/4-scale-down-zero-watcher-api | running command: [sh -c oc patch watcher -n $NAMESPACE watcher-kuttl --type=''json'' -p=''[{"op": "replace", "path": "/spec/apiServiceTemplate/replicas", "value":0}]''' - ' ]' - ' logger.go:42: 07:09:19 | watcher-api-scaling/4-scale-down-zero-watcher-api | watcher.watcher.openstack.org/watcher-kuttl patched' - ' logger.go:42: 07:09:20 | watcher-api-scaling/4-scale-down-zero-watcher-api | test step completed 4-scale-down-zero-watcher-api' - ' logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | starting test step 5-cleanup-watcher' - ' logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | running command: [sh -c set -ex' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]' - ' ]' - ' logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:09:20 | watcher-api-scaling/5-cleanup-watcher | ++ grep -c ''^watcher''' - ' logger.go:42: 07:09:23 | watcher-api-scaling/5-cleanup-watcher | + ''['' 1 == 0 '']''' - ' logger.go:42: 07:09:24 | watcher-api-scaling/5-cleanup-watcher | running command: [sh -c set -ex' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]' - ' ]' - ' logger.go:42: 07:09:24 | watcher-api-scaling/5-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:09:24 | watcher-api-scaling/5-cleanup-watcher | ++ grep -c ''^watcher''' - ' logger.go:42: 07:09:27 | watcher-api-scaling/5-cleanup-watcher | + ''['' 0 == 0 '']''' - ' logger.go:42: 07:09:27 | watcher-api-scaling/5-cleanup-watcher | test step completed 5-cleanup-watcher' - ' 
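The kuttl step above counts `_URL_DEFAULT` environment variables from `oc set env ... --list` output. As a minimal standalone sketch of that same counting logic, with hardcoded sample data standing in for the live `oc` query (all variable names and image URLs below are illustrative, not from the real operator deployment):

```shell
#!/bin/sh
# Sketch of the _URL_DEFAULT counting check from the kuttl step above.
# env_variables is hardcoded sample data replacing the output of
# `oc set env <pod> --list`; the names here are hypothetical.
set -eu
env_variables="WATCHER_API_IMAGE_URL_DEFAULT=quay.io/example/watcher-api
WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT=quay.io/example/watcher-decision-engine
WATCHER_APPLIER_IMAGE_URL_DEFAULT=quay.io/example/watcher-applier
UNRELATED_VAR=1"
counter=0
# Word-splitting on the unquoted expansion iterates one VAR=value per line.
for i in ${env_variables}; do
    if echo "${i}" | grep '_URL_DEFAULT' > /dev/null 2>&1; then
        counter=$((counter + 1))
    fi
done
if [ "${counter}" -lt 3 ]; then
    echo "Error: Less than 3 _URL_DEFAULT variables found."
    exit 1
else
    echo "Success: ${counter} _URL_DEFAULT variables found."
fi
```

Note the sketch uses `> /dev/null 2>&1` instead of the logged script's `&> /dev/null`, since `&>` is a bash extension and the kuttl command runs under `sh`.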
logger.go:42: 07:09:27 | watcher-api-scaling | skipping kubernetes event logging
=== CONT kuttl/harness/watcher-cinder
logger.go:42: 07:09:27 | watcher-cinder | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:09:27 | watcher-cinder/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:09:27 | watcher-cinder/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | starting test step 1-deploy-watcher-no-cinder
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:27 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:28 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:29 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:30 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:30 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:30 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:31 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:31 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:32 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:33 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:34 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:35 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:36 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:37 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:37 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:37 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:38 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:38 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | error: error from server (NotFound): pods "watcher-kuttl-decision-engine-0" not found in namespace "watcher-kuttl-default"
logger.go:42: 07:09:39 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | Error from server (BadRequest): container "watcher-decision-engine" in pod "watcher-kuttl-decision-engine-0" is waiting to start: ContainerCreating
logger.go:42: 07:09:40 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:41 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:42 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:09:43 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:43 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:43 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:44 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 1 == 2 ']'
logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:45 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:46 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:47 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:48 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:09:49 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:50 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:50 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:50 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 | grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:09:51 | watcher-cinder/1-deploy-watcher-no-cinder | test step completed 1-deploy-watcher-no-cinder
logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | starting test step 2-deploy-cinder
logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | OpenStackControlPlane:watcher-kuttl-default/openstack updated
logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:51 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:52 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:09:53 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:09:53 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:53 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:54 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:55 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:56 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:09:57 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:09:58 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects
that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:09:58 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:09:58 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:09:59 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:00 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, 
skipping storage collector''' - ' logger.go:42: 07:10:01 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:02 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:03 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:03 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:03 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:04 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' 
- ' ]' - ' logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:05 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:06 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:07 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the 
decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:08 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:09 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:09 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:09 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:10 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not 
enabled, skipping storage collector''' - ' logger.go:42: 07:10:11 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:12 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:12 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:12 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:12 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:13 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:14 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" 
== 0 ]' - ' ]' - ' logger.go:42: 07:10:14 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:14 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:15 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:16 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:17 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the 
decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:18 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:19 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:19 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:19 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:20 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | ++ oc logs -n 
watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:21 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:22 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:23 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:24 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping 
storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:24 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:24 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:25 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:26 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:27 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:27 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:27 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:27 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo 
pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:28 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:29 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:30 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:30 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:30 | watcher-cinder/2-deploy-cinder | ++ grep -c 
''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:31 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:32 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:33 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:33 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:33 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:33 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, 
skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:34 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:35 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:36 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:36 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:36 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:37 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo 
pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:38 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:39 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | ++ grep -c 
''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:40 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:41 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:41 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:41 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:42 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, skipping storage collector'')" == 0 ]' - ' ]' - ' logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0' - ' logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | ++ grep -c ''Block storage service is not enabled, skipping storage collector''' - ' logger.go:42: 07:10:43 | watcher-cinder/2-deploy-cinder | + ''['' 2 == 0 '']''' - ' logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail' - ' # check that the decision detects that there is a cinder service and' - ' # does not log that storage collector is skipped' - ' [ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c ''Block storage service is not enabled, 
skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:44 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:45 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:47 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:48 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:49 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:50 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:51 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:53 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:54 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:55 | watcher-cinder/2-deploy-cinder | + '[' 2 == 0 ']'
logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | Error from server (BadRequest): container "watcher-decision-engine" in pod "watcher-kuttl-decision-engine-0" is waiting to start: ContainerCreating
logger.go:42: 07:10:56 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:10:57 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:57 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:57 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:58 | watcher-cinder/2-deploy-cinder | Error from server (BadRequest): container "watcher-decision-engine" in pod "watcher-kuttl-decision-engine-0" is waiting to start: ContainerCreating
logger.go:42: 07:10:58 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:10:59 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:00 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:01 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:02 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:03 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:03 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:03 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:04 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:05 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:06 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:07 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:08 | watcher-cinder/2-deploy-cinder | running command: [sh -c set -euxo pipefail
# check that the decision detects that there is a cinder service and
# does not log that storage collector is skipped
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 0 ]
]
logger.go:42: 07:11:08 | watcher-cinder/2-deploy-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:08 | watcher-cinder/2-deploy-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:09 | watcher-cinder/2-deploy-cinder | + '[' 0 == 0 ']'
logger.go:42: 07:11:09 | watcher-cinder/2-deploy-cinder | test step completed 2-deploy-cinder
logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | starting test step 3-remove-cinder
logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | OpenStackControlPlane:watcher-kuttl-default/openstack updated
logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:09 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:10 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:11 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:12 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:12 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:12 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:13 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:14 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:15 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:16 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:17 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:18 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:18 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:18 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:19 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:20 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:21 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:22 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | Error from server (BadRequest): container "watcher-decision-engine" in pod "watcher-kuttl-decision-engine-0" is waiting to start: ContainerCreating
logger.go:42: 07:11:23 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:24 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:24 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:24 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:25 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:26 | watcher-cinder/3-remove-cinder | + '[' 0 == 2 ']'
logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:27 | watcher-cinder/3-remove-cinder | + '[' 1 == 2 ']'
logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:28 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:11:29 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:29 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:29 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:30 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:31 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:11:32 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:32 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:32 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:32 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:33 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | running command: [sh -c set -euxo pipefail
# check that the decision engine correctly detects that there is no cinder service
[ "$(oc logs -n $NAMESPACE watcher-kuttl-decision-engine-0 |grep -c 'Block storage service is not enabled, skipping storage collector')" == 2 ]
]
logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | ++ oc logs -n watcher-kuttl-default watcher-kuttl-decision-engine-0
logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | ++ grep -c 'Block storage service is not enabled, skipping storage collector'
logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | + '[' 2 == 2 ']'
logger.go:42: 07:11:34 | watcher-cinder/3-remove-cinder | test step completed 3-remove-cinder
logger.go:42: 07:11:34 | watcher-cinder/4-cleanup-watcher | starting test step 4-cleanup-watcher
logger.go:42: 07:11:43 | watcher-cinder/4-cleanup-watcher | test step completed 4-cleanup-watcher
logger.go:42: 07:11:43 | watcher-cinder | skipping kubernetes event logging
=== CONT  kuttl/harness/watcher
logger.go:42: 07:11:43 | watcher | Skipping creation of user-supplied namespace: watcher-kuttl-default
logger.go:42: 07:11:43 | watcher/0-cleanup-watcher | starting test step 0-cleanup-watcher
logger.go:42: 07:11:43 | watcher/0-cleanup-watcher | test step completed 0-cleanup-watcher
logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | starting test step 1-deploy-with-defaults
logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | Watcher:watcher-kuttl-default/watcher-kuttl created
logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:11:43 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:11:46 | watcher/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:11:47 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:11:47 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:11:47 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:11:49 | watcher/1-deploy-with-defaults | + '[' 0 == 1 ']'
logger.go:42: 07:11:50 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail
oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]
SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk '{print $1}')
[ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]
[ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]
[ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.my\.cnf}'|base64 -d|grep -c 'ssl=1')" == 1 ]
[ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath='{.data.00-default\.conf}'|base64 -d|grep -c 'cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem')" == 2 ]
# If we are running the container locally, skip following test
if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then
exit 0
fi
env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)
counter=0
for i in ${env_variables}; do
if echo ${i} | grep '_URL_DEFAULT' &> /dev/null; then
echo ${i}
counter=$((counter + 1))
fi
done
if [ ${counter} -lt 3 ]; then
echo "Error: Less than 3 _URL_DEFAULT variables found."
exit 1
else
echo "Success: ${counter} _URL_DEFAULT variables found."
fi
]
logger.go:42: 07:11:50 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type
logger.go:42: 07:11:50 | watcher/1-deploy-with-defaults | ++ grep -c '^watcher'
logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | + '[' 1 == 1 ']'
logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID
logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | ++ awk '{print $1}'
logger.go:42: 07:11:53 | watcher/1-deploy-with-defaults | ++ grep watcher
logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e
logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o 'jsonpath={.status.serviceID}'
logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | + '[' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e ']'
logger.go:42: 07:11:55 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o 
''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.my\.cnf}''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.00-default\.conf}''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:11:56 | watcher/1-deploy-with-defaults | + exit 0' - ' logger.go:42: 07:11:57 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n watcher-kuttl-default watcher 
watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:11:57 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:11:57 | watcher/1-deploy-with-defaults | ++ grep -c ''^watcher''' - ' logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | ++ grep watcher' - ' logger.go:42: 07:12:00 | watcher/1-deploy-with-defaults | ++ awk ''{print $1}''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n 
watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + ''['' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e '']''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.my\.cnf}''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.00-default\.conf}''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:12:02 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:12:03 | watcher/1-deploy-with-defaults | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:12:03 | watcher/1-deploy-with-defaults | + exit 0' - ' logger.go:42: 07:12:04 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c 
^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:12:04 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:12:04 | watcher/1-deploy-with-defaults | ++ grep -c ''^watcher''' - ' logger.go:42: 07:12:06 | watcher/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:06 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 
07:12:06 | watcher/1-deploy-with-defaults | ++ awk ''{print $1}''' - ' logger.go:42: 07:12:06 | watcher/1-deploy-with-defaults | ++ grep watcher' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | + ''['' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e '']''' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.my\.cnf}''' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1' - ' logger.go:42: 07:12:09 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem''' - ' logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.00-default\.conf}''' - ' logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:12:10 | 
watcher/1-deploy-with-defaults | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:12:10 | watcher/1-deploy-with-defaults | + exit 0' - ' logger.go:42: 07:12:11 | watcher/1-deploy-with-defaults | running command: [sh -c set -euxo pipefail' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type |[ $(grep -c ^watcher) == 1 ]' - ' SERVICEID=$(oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID | grep watcher| awk ''{print $1}'')' - ' [ $(oc get -n watcher-kuttl-default keystoneservice watcher -o jsonpath={.status.serviceID}) == ${SERVICEID} ]' - ' [ -n "$(oc get -n watcher-kuttl-default watcher watcher-kuttl -o jsonpath={.status.hash.dbsync})" ]' - ' [ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath=''{.data.my\.cnf}''|base64 -d|grep -c ''ssl=1'')" == 1 ]' - ' [ "$(oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o jsonpath=''{.data.00-default\.conf}''|base64 -d|grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'')" == 2 ]' - ' # If we are running the container locally, skip following test' - ' if [ "$(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher)" == "" ]; then' - ' exit 0' - ' fi' - ' env_variables=$(oc set env $(oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher) -n openstack-operators --list)' - ' counter=0' - ' for i in ${env_variables}; do' - ' if echo ${i} | grep ''_URL_DEFAULT'' &> /dev/null; then' - ' echo ${i}' - ' counter=$((counter + 1))' - ' fi' - ' done' - ' if [ ${counter} -lt 3 ]; then' - ' echo "Error: Less than 3 _URL_DEFAULT variables found."' - ' exit 1' - ' else' - ' echo "Success: ${counter} _URL_DEFAULT variables found."' - ' fi' - ' ]' - ' logger.go:42: 07:12:11 | watcher/1-deploy-with-defaults | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' 
logger.go:42: 07:12:11 | watcher/1-deploy-with-defaults | ++ grep -c ''^watcher''' - ' logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | ++ oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type -c ID' - ' logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | ++ grep watcher' - ' logger.go:42: 07:12:13 | watcher/1-deploy-with-defaults | ++ awk ''{print $1}''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + SERVICEID=f748c9f4bbbd40ff95d6ecdd7fa3537e' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default keystoneservice watcher -o ''jsonpath={.status.serviceID}''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + ''['' f748c9f4bbbd40ff95d6ecdd7fa3537e == f748c9f4bbbd40ff95d6ecdd7fa3537e '']''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default watcher watcher-kuttl -o ''jsonpath={.status.hash.dbsync}''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + ''['' -n nbbh56bh4h647h5fbh67dh699h5bdh85hb7h65fh676h555h9bh5c4hf9h684h55dh674h564h565h6fh578hd9h56bh5fchcch687h68fh568h688h69q '']''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ grep -c ssl=1' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.my\.cnf}''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get -n watcher-kuttl-default secret watcher-kuttl-api-config-data -o ''jsonpath={.data.00-default\.conf}''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ base64 -d' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ 
grep -c ''cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + ''['' 2 == 2 '']''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | ++ oc get pods -n openstack-operators -o name -l openstack.org/operator-name=watcher' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + ''['' '''' == '''' '']''' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | + exit 0' - ' logger.go:42: 07:12:16 | watcher/1-deploy-with-defaults | test step completed 1-deploy-with-defaults' - ' logger.go:42: 07:12:16 | watcher/2-cleanup-watcher | starting test step 2-cleanup-watcher' - ' logger.go:42: 07:12:16 | watcher/2-cleanup-watcher | test step completed 2-cleanup-watcher' - ' logger.go:42: 07:12:16 | watcher/3-precreate-mariadbaccount | starting test step 3-precreate-mariadbaccount' - ' logger.go:42: 07:12:16 | watcher/3-precreate-mariadbaccount | MariaDBAccount:watcher-kuttl-default/watcher-precreated created' - ' logger.go:42: 07:12:16 | watcher/3-precreate-mariadbaccount | test step completed 3-precreate-mariadbaccount' - ' logger.go:42: 07:12:16 | watcher/4-deploy-with-precreated-account | starting test step 4-deploy-with-precreated-account' - ' logger.go:42: 07:12:16 | watcher/4-deploy-with-precreated-account | Secret:wa**********ig created' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | Watcher:watcher-kuttl-default/watcher-kuttl created' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:17 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:19 | watcher/4-deploy-with-precreated-account | error: Internal error occurred: error executing command in container: container is not created or running' - ' logger.go:42: 07:12:19 | watcher/4-deploy-with-precreated-account | ++ echo' - ' logger.go:42: 07:12:19 | watcher/4-deploy-with-precreated-account | + 
''['' 0 == 1 '']''' - ' logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:20 | watcher/4-deploy-with-precreated-account | + APIPOD=' - ' logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:21 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:22 | watcher/4-deploy-with-precreated-account | + APIPOD=' - ' logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:23 | watcher/4-deploy-with-precreated-account | + APIPOD=' - ' logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:24 | watcher/4-deploy-with-precreated-account | + APIPOD=' - ' logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:25 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | error: unable to upgrade connection: container not found ("watcher-api")' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | ++ echo' - ' logger.go:42: 07:12:34 | watcher/4-deploy-with-precreated-account | + ''['' 0 == 1 '']''' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo 
''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:12:36 | 
watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:12:36 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = 
Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - ' logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName 
watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:12:37 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:40 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 
''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = 
/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes 
+FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:12:41 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:12:43 | 
watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:12:43 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = 
https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - 
' logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:12:44 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo 
pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:12:45 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:12:46 | 
watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt 
tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:12:46 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - ' logger.go:42: 07:12:47 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 
user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:12:47 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 
''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:12:49 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = 
/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - ' logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes 
+FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:12:50 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:51 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat 
/etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = 
https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - 
' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:12:52 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:53 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo 
pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:12:53 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
- ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:12:54 | 
watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt 
tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:12:54 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - ' logger.go:42: 07:12:55 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 
user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:12:55 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']'''
user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:13:02 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:13:03 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ grep -czPo 
''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = 
/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes 
+FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:13:04 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat 
/etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' - ' logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:13:05 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:13:06 | 
watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:13:06 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = 
https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - 
' logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:13:07 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | running command: [sh -c set -euxo 
pipefail' - ' oc project watcher-kuttl-default' - ' APIPOD=$(oc get pods -n watcher-kuttl-default -l "service=watcher-api" -ocustom-columns=:metadata.name|grep -v ^$|head -1)' - ' if [ -n "${APIPOD}" ]; then' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/01-global-custom.conf) |grep -c "^# Global config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/02-service-custom.conf) |grep -c "^# Service config") == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/watcher/watcher.conf.d/00-default.conf) |grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem'') == 1 ]' - ' [ $(echo $(oc rsh -c watcher-api ${APIPOD} cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf) |grep -czPo ''TimeOut 80'') == 1 ]' - ' else' - ' exit 1' - ' fi' - ' ]' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + oc project watcher-kuttl-default' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | Already on project "watcher-kuttl-default" on server "https://api.crc.testing:6443".' 
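The assert script echoed above leans on one non-obvious trick: `echo $(oc rsh … cat file)` flattens the file to a single line, and `grep -czPo` then matches a `\s+`-separated pattern across the original line breaks (`-z` makes the whole input one record, `-c` prints the record-match count the test compares to 1). A standalone sketch of the same check, using a hypothetical config fragment in place of the live `00-default.conf` and assuming GNU grep (the `-P` flag is a GNU extension):

```shell
# Hypothetical stand-in for the [prometheus_client] section of
# /etc/watcher/watcher.conf.d/00-default.conf.
conf='[prometheus_client]
host = metric-storage-prometheus.watcher-kuttl-default.svc
port = 9090'

# `echo $conf` (unquoted) collapses the newlines to spaces; -z makes grep
# treat all of stdin as a single record; -P enables \s+; -c prints the
# number of matching records, which the kuttl assert compares against 1.
matches=$(echo $conf | grep -czPo '\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090')
echo "$matches"
```

Without the `echo $(…)` flattening and `-z`, a plain `grep` would only ever see one config line at a time and could not assert the section's contents in order.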
- ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ oc get pods -n watcher-kuttl-default -l service=watcher-api -ocustom-columns=:metadata.name' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ head -1' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ grep -v ''^$''' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + APIPOD=watcher-kuttl-api-0' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + ''['' -n watcher-kuttl-api-0 '']''' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Global config''' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/01-global-custom.conf' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Global config' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | ++ grep -c ''^# Service config''' - ' logger.go:42: 07:13:08 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/02-service-custom.conf' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' Service config' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''\[prometheus_client\]\s+host\s+=\s+metric-storage-prometheus.watcher-kuttl-default.svc\s+port\s+=\s+9090\s+cafile\s+=\s+/etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem''' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/watcher/watcher.conf.d/00-default.conf' - ' logger.go:42: 07:13:09 | 
watcher/4-deploy-with-precreated-account | ++ echo ''[DEFAULT]'' state_path = /var/lib/watcher transport_url = ''rabbit://**********=1'' control_exchange = watcher debug = True log_file = /var/log/watcher/watcher-kuttl-api.log ''#'' empty notification_level means that no notification will be sent notification_level = ''[database]'' connection = ''mysql+pymysql://watcher_test:5de8c0dffb799bdb4516f938bcffc35d@openstack.watcher-kuttl-default.svc/watcher?read_default_file=/etc/my.cnf'' ''[oslo_policy]'' policy_file = /etc/watcher/policy.yaml.sample ''[oslo_messaging_notifications]'' driver = noop ''[oslo_messaging_rabbit]'' rabbit_quorum_queue=true rabbit_transient_quorum_queue=true amqp_durable_queues=true ''[keystone_authtoken]'' memcached_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 tls_enabled=true memcache_tls_certfile = /etc/pki/tls/certs/mtls.crt memcache_tls_keyfile = /etc/pki/tls/private/mtls.key memcache_tls_cafile = /etc/pki/tls/certs/mtls-ca.crt memcache_tls_enabled = true project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[watcher_clients_auth]'' project_domain_name = Default project_name = service user_domain_name = Default password = pa**********rd username = watcher auth_url = https://keystone-internal.watcher-kuttl-default.svc:5000 interface = internal auth_type = password cafile = /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem ''[oslo_concurrency]'' lock_path = /var/lib/watcher/tmp ''[watcher_datasources]'' datasources = prometheus ''[cache]'' backend = oslo_cache.memcache_pool memcache_servers=memcached-0.memcached.watcher-kuttl-default.svc:11212 memcache_socket_timeout = 0.5 memcache_pool_connection_get_timeout = 1 enabled=true tls_enabled=true tls_certfile=/etc/pki/tls/certs/mtls.crt 
tls_keyfile=/etc/pki/tls/private/mtls.key tls_cafile=/etc/pki/tls/certs/mtls-ca.crt memcache_dead_retry = 30 ''[prometheus_client]'' host = metric-storage-prometheus.watcher-kuttl-default.svc port = 9090 cafile = /etc/pki/ca-trust/extracted/pem/prometheus/internal-ca-bundle.pem ''[cinder_client]'' endpoint_type = internal ''[glance_client]'' endpoint_type = internal ''[ironic_client]'' endpoint_type = internal ''[keystone_client]'' interface = internal ''[neutron_client]'' endpoint_type = internal ''[nova_client]'' endpoint_type = internal ''[placement_client]'' interface = internal ''[watcher_cluster_data_model_collectors.compute]'' period = 900 ''[watcher_cluster_data_model_collectors.baremetal]'' period = 900 ''[watcher_cluster_data_model_collectors.storage]'' period = 900' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ grep -czPo ''TimeOut 80''' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | +++ oc rsh -c watcher-api watcher-kuttl-api-0 cat /etc/httpd/conf.d/10-watcher-wsgi-main.conf' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | ++ echo ''#'' internal vhost watcher-internal.watcher-kuttl-default.svc configuration '''' ServerName watcher-internal.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess internal display-name=internal group=watcher processes=2 threads=1 
user=watcher WSGIProcessGroup internal WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' '''' ''#'' public vhost watcher-public.watcher-kuttl-default.svc configuration '''' ServerName watcher-public.watcher-kuttl-default.svc ''##'' Vhost docroot DocumentRoot ''"/var/www/cgi-bin"'' ''#'' Set the timeout for the watcher-api TimeOut 80 ''##'' Directories, there should at least be a declaration for /var/www/cgi-bin '''' Options -Indexes +FollowSymLinks +MultiViews AllowOverride None Require all granted '''' ''##'' Logging ErrorLog /dev/stdout ServerSignature Off CustomLog /dev/stdout combined ''env=!forwarded'' CustomLog /dev/stdout proxy env=forwarded ''##'' set watcher log level to debug LogLevel debug ''##'' WSGI configuration WSGIApplicationGroup ''%{GLOBAL}'' WSGIDaemonProcess public display-name=public group=watcher processes=2 threads=1 user=watcher WSGIProcessGroup public WSGIScriptAlias / ''"/usr/bin/watcher-api-wsgi"'' ''''' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | + ''['' 1 == 1 '']''' - ' logger.go:42: 07:13:09 | watcher/4-deploy-with-precreated-account | test step completed 4-deploy-with-precreated-account' - ' logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | starting test step 5-cleanup-watcher' - ' logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | running command: [sh -c set -ex' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type | [ $(grep -c ^watcher) == 0 ]' - ' ]' - ' logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:13:09 | watcher/5-cleanup-watcher | ++ grep -c ''^watcher''' - ' logger.go:42: 07:13:13 | watcher/5-cleanup-watcher | + ''['' 1 == 0 '']''' - ' logger.go:42: 07:13:14 | watcher/5-cleanup-watcher | running command: [sh -c set -ex' - ' oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name 
-c Type | [ $(grep -c ^watcher) == 0 ]' - ' ]' - ' logger.go:42: 07:13:14 | watcher/5-cleanup-watcher | + oc exec -n watcher-kuttl-default openstackclient -- openstack service list -f value -c Name -c Type' - ' logger.go:42: 07:13:14 | watcher/5-cleanup-watcher | ++ grep -c ''^watcher''' - ' logger.go:42: 07:13:16 | watcher/5-cleanup-watcher | + ''['' 0 == 0 '']''' - ' logger.go:42: 07:13:16 | watcher/5-cleanup-watcher | test step completed 5-cleanup-watcher' - ' logger.go:42: 07:13:16 | watcher | skipping kubernetes event logging' - === CONT kuttl/harness/deps - ' logger.go:42: 07:13:16 | deps | Ignoring infra.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 07:13:16 | deps | Ignoring keystone.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 07:13:16 | deps | Ignoring kustomization.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 07:13:16 | deps | Ignoring namespace.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 07:13:16 | deps | Ignoring telemetry.yaml as it does not match file name regexp: ^(\d+)-(?:[^\.]+)(?:\.yaml)?$' - ' logger.go:42: 07:13:16 | deps | Skipping creation of user-supplied namespace: watcher-kuttl-default' - ' logger.go:42: 07:13:16 | deps | skipping kubernetes event logging' - === NAME kuttl - ' harness.go:406: run tests finished' - ' harness.go:514: cleaning up' - ' harness.go:571: removing temp folder: ""' - '--- FAIL: kuttl (899.85s)' - ' --- FAIL: kuttl/harness (0.00s)' - ' --- PASS: kuttl/harness/common (0.01s)' - ' --- PASS: kuttl/harness/watcher-notification (76.18s)' - ' --- PASS: kuttl/harness/watcher-topology (33.98s)' - ' --- PASS: kuttl/harness/watcher-tls-certs-change (43.50s)' - ' --- PASS: kuttl/harness/watcher-tls (156.00s)' - ' --- FAIL: kuttl/harness/watcher-rmquser (300.20s)' - ' --- PASS: kuttl/harness/watcher-api-scaling (60.38s)' - ' --- PASS: 
kuttl/harness/watcher-cinder (135.83s)' - ' --- PASS: kuttl/harness/watcher (93.74s)' - ' --- PASS: kuttl/harness/deps (0.00s)' - FAIL 2026-01-22 07:13:17,253 p=40075 u=zuul n=ansible | NO MORE HOSTS LEFT ************************************************************* 2026-01-22 07:13:17,254 p=40075 u=zuul n=ansible | PLAY RECAP ********************************************************************* 2026-01-22 07:13:17,254 p=40075 u=zuul n=ansible | controller : ok=2 changed=0 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | Thursday 22 January 2026 07:13:17 +0000 (0:15:00.834) 0:15:01.227 ****** 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | =============================================================================== 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | run kuttl test suite from operator Makefile --------------------------- 900.83s 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | run_hook : Loop on hooks for pre_kuttl_from_operator -------------------- 0.15s 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | run_hook : Assert single hooks are all mappings ------------------------- 0.10s 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | run_hook : Assert parameters are valid ---------------------------------- 0.07s 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | Run hooks before running kuttl tests ------------------------------------ 0.04s 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | Thursday 22 January 2026 07:13:17 +0000 (0:15:00.834) 0:15:01.225 ****** 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | =============================================================================== 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | ansible.builtin.command ----------------------------------------------- 900.83s 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | run_hook ---------------------------------------------------------------- 0.31s 2026-01-22 07:13:17,255 p=40075 u=zuul 
n=ansible | ansible.builtin.include_role -------------------------------------------- 0.04s 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2026-01-22 07:13:17,255 p=40075 u=zuul n=ansible | total ----------------------------------------------------------------- 901.18s
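The `5-cleanup-watcher` step in the log above had to retry once: the first `grep -c '^watcher'` over the service list printed 1 (the Keystone service was still registered), the second printed 0 and the step passed. A minimal standalone illustration of that count check, with hypothetical service-list snapshots standing in for live `openstack service list -f value -c Name -c Type` output:

```shell
# Hypothetical snapshots of the service list before and after the
# watcher Keystone service is deleted.
before='watcher infra-optim
keystone identity'
after='keystone identity'

# Same check as the cleanup step: count lines starting with "watcher".
# grep -c still prints "0" when nothing matches (it signals the miss only
# through its exit status, hence the `|| true`), so the numeric comparison
# in the kuttl assert works in both cases.
before_count=$(echo "$before" | grep -c '^watcher')
after_count=$(echo "$after" | grep -c '^watcher' || true)
echo "before=${before_count} after=${after_count}"
```

The `|| true` matters under `set -e`/`set -o pipefail` (which the kuttl scripts enable): a zero count would otherwise abort the script via grep's non-zero exit status before the comparison runs.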